Many of the virus, adware, security, and crash problems with Windows occur when someone installs a driver of dubious origin. The driver supposedly provides some special feature for Windows but in reality makes Windows unstable and can open doors for people of ill intent who want your system for themselves. Of course, Microsoft’s solution is to lock down Windows so that you can use only signed drivers. A signed driver is one in which the driver creator uses a special digital signature to “sign” the driver software. You can examine this signature (as can Windows) to ensure that the driver is legitimate.
Windows 2008 doesn’t load a driver that the vendor hasn’t signed. Unfortunately, you’ll find more unsigned than signed drivers on the market right now. Vendors haven’t signed their drivers, for the most part, because the process is incredibly expensive and difficult. Many vendors see the new Windows 2008 feature as Microsoft’s method of forcing them to spend money on something whose value they dispute. Theoretically, someone can forge a signature, which means that the signing process isn’t foolproof and may not actually make Windows more secure or reliable. Of course, the market will eventually decide whether Microsoft or the vendors are correct, but for now you have to worry about having signed drivers to use with Windows.
Sometimes, not having a signed driver can cause your system to boot incorrectly or not at all. The Disable Driver Signature Enforcement option lets you override Microsoft’s decision to use only signed drivers. When you choose this option, Windows boots as it normally does. The only difference is that it doesn’t check the drivers it loads for a signature. You may even notice that Windows starts faster. Of course, you’re giving up a little extra reliability and security to use this feature — at least in theory.
You can’t permanently disable the use of signed drivers in the 64-bit version of Windows Server 2008 — at least, not using any Microsoft-recognized technique. It’s possible to disable the use of signed drivers in the 32-bit version by making a change in the global policy (more on this technique later in the section). A company named Linchpin Labs has a product called Atsiv (http://www.linchpinlabs.com/resources/atsiv/usage-design.htm), which lets you overcome this problem, even on 64-bit systems. Microsoft is fighting a very nasty war to prevent people from using the product. (They recently asked VeriSign to revoke the company’s digital certificate and had the product declared malware; read more about this issue at http://avantgo.computerworld.com.au/avantgo_story.php?id=69104626.)
Using the boot method of permanently disabling signed driver checking
An undocumented method of disabling the signed driver requirement for both 32-bit and 64-bit versions of Windows Server 2008 is to use the BCDEdit utility to make a change to the boot configuration. Because this feature isn’t documented, Microsoft could remove it at any time. This procedure isn’t something that a novice administrator should attempt to do, but it’s doable. The following steps describe the process:
1. Choose Start -> Programs -> Accessories.
You see the Accessories menu.
2. Right-click Command Prompt and choose Run As Administrator from the context menu. Windows opens a command line with elevated privileges. You can tell that the privileges are elevated because the title bar states that this is the administrator’s command prompt rather than a standard command prompt.
3. Type BCDEdit /Export C:\BCDBackup and press Enter. BCDEdit displays the message This Operation Completed Successfully. This command saves a copy of your current boot configuration to the C:\BCDBackup file. Never change the boot configuration without making a backup.
4. Type BCDEdit /Set LoadOptions DDISABLE_INTEGRITY_CHECKS and press Enter. BCDEdit displays the message This Operation Completed Successfully. The Driver Disable (DDISABLE) option tells Windows not to check the signing of your drivers during the boot process. Be sure to type the BCDEdit command precisely as shown. The BCDEdit utility is very powerful and can cause your system not to boot when used incorrectly. If you make a mistake, you probably have to use the technique described in the “Using the Command Prompt” section of this chapter to open a command prompt using your boot CD and then fix the problem by using the BCDEdit /Import C:\BCDBackup command. This technique modifies only the current boot configuration. If your server has multiple boot partitions, you must make this change for each partition individually.
5. Restart your system as normal to use the new configuration.
Using the group policy method of permanently disabling signed driver checking
Users of the 32-bit version of Windows Server 2008 also have a documented and Microsoft-approved method of bypassing the signing requirement. (This technique will never work on the 64-bit version of the product.) In this case, you set a global policy that disables the requirement for the local machine (when made on the local machine) or the domain (when made on the domain controller). The following steps describe how to use the Group Policy Editor (GPEdit) console to perform this task.
1. Choose Start -> Run.
You see the Run dialog box.
2. Type GPEdit.MSC (for Group Policy Editor) in the Open field and click OK. Windows displays the Local Group Policy Editor window.
3. Locate the Local Computer Policy\User Configuration\Administrative Templates\System\Driver Installation folder.
4. Double-click the Code Signing for Device Drivers policy.
5. Select Enabled.
6. Choose Ignore (installs unsigned drivers without asking), Warn (displays a message asking whether you want to install the unsigned driver), or Block (disallows unsigned driver installation automatically) from the drop-down list.
7. Click OK.
The Local Group Policy Editor console sets the new policy for installing device drivers.
8. Close the Local Group Policy Editor console.
9. Reboot the server.
Theoretically, the changes you made should take effect immediately after you log back in to the system. However, to make sure the policy takes effect for everyone, reboot the server.
Source of Information : Windows Server 2008 All-In-One Desk Reference For Dummies
Monday, August 31, 2009
The following compliance best practices fall under six major categories. Each of the categories represents a step in a typical compliance process.
1. Scanning Code. The first step in the compliance process is usually scanning the source code, also sometimes called auditing the source code. Some common practices in this area include:
• Scan everything—proprietary code, third-party software, and even open-source software, because your team might have introduced modifications, triggering the need for additional due diligence and additional obligations to fulfill.
• Scan early and often—scan as early in the development process and as often as possible to identify new packages entering your build.
• Scan newer versions of previously approved packages—in the event that a previously approved package was modified, you should rescan it to ensure that any code added to it does not have a conflicting license and that there are no additional obligations to meet.
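The scanning step can be illustrated with a minimal sketch. Real scanning tools do far more (snippet matching, fingerprinting against a license database, and so on); the license-header patterns below are assumptions chosen for illustration only:

```python
import os
import re

# Naive license-detection patterns -- illustrative only. Real scanners
# match code snippets and fingerprints, not just header text.
LICENSE_PATTERNS = {
    "GPL-2.0": re.compile(r"GNU General Public License.*version 2", re.S | re.I),
    "MIT": re.compile(r"Permission is hereby granted, free of charge", re.I),
    "BSD": re.compile(r"Redistribution and use in source and binary forms", re.I),
}

def scan_file(path):
    """Return the licenses whose markers appear in a single source file."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read(8192)  # license headers usually sit near the top
    return sorted(name for name, pat in LICENSE_PATTERNS.items() if pat.search(text))

def scan_tree(root):
    """Scan every file under root; the inventory maps path -> detected licenses."""
    inventory = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            inventory[os.path.relpath(path, root)] = scan_file(path)
    return inventory
```

A file that matches no pattern ends up with an empty license list, which is itself a flag: it needs manual identification rather than a clean bill of health.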
2. Identification and Resolution of Flagged Issues. After scanning the source code, the scanning tool generates a report that includes a “bill of materials”, an inventory of all the files in the source code package and their discovered licenses, in addition to flagging any possible licensing issues found and pinpointing the offending code. Here’s what should happen next:
• Inspect and resolve each file or snippet flagged by the scanning tool.
• Identify whether your engineers made any code modifications. Ideally, you shouldn’t rely on engineers to remember if they made code changes. You should rely on your build tools to be able to identify code changes, who made them and when.
• When in doubt about the scan results, discuss them with Engineering.
• If a GPL (or other) violation is found, you should report to Engineering and request a correction. Rescan the code after resolving the violation to ensure compliance.
• In preparation for legal review, attach to the compliance ticket all licensing information (COPYING, README, LICENSE files and so on) related to the open-source software in question.
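The triage that follows scanning can be sketched as a simple policy check over the per-file scan results. The policy lists below are hypothetical examples, not legal guidance; a real OSRB defines its own denied and review-required licenses:

```python
# Hypothetical policy lists -- which licenses are denied outright and which
# require legal review is a decision for your own OSRB, not this sketch.
DENIED = {"GPL-3.0"}
NEEDS_REVIEW = {"GPL-2.0", "LGPL-2.1"}

def triage(scan_results):
    """Split scanned files into flagged issues and cleared files.

    scan_results maps file path -> list of detected licenses.
    """
    flagged, cleared = [], []
    for path, licenses in sorted(scan_results.items()):
        if not licenses:
            flagged.append((path, "no license information found"))
        elif DENIED.intersection(licenses):
            bad = ", ".join(sorted(DENIED.intersection(licenses)))
            flagged.append((path, "denied license: " + bad))
        elif NEEDS_REVIEW.intersection(licenses):
            flagged.append((path, "needs legal review"))
        else:
            cleared.append(path)
    return flagged, cleared
```

Each flagged entry would become (or attach to) a compliance ticket; the cleared list feeds the bill of materials for the legal review stage.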
3. Architecture Review. The architecture review is an analysis of the interaction between the open-source code and your proprietary code. Typically, the architecture review is performed by examining an architectural diagram that identifies the following:
• Open-source components (used as is or modified).
• Proprietary components.
• Components’ dependencies.
• Communication protocols.
• Linkages (dynamic and static).
• Components that live in kernel space vs. userspace.
• Shared header files.
The result of the architecture review is an analysis of the licensing obligations that may extend from the open-source components to the proprietary components.
4. Linkage Analysis. The purpose of the linkage analysis is to find potentially problematic code combinations at the dynamic link level, such as dynamically linking a GPL library to a proprietary source-code component. The common practices in this area include:
• Performing dynamic linkage analysis for each package in the build.
• If a linkage conflict is identified, report it to Engineering to resolve.
• Redo the linkage analysis on the updated source code to verify that the code changes introduced by Engineering resolved the linkage issue.
As for static linkages, companies usually have policies that govern their use, because static linking combines proprietary work with open-source libraries into one binary. These linkage cases are discussed and resolved on a case-by-case basis.
The difference between static and dynamic linking highlights the importance of identifying how open-source license obligations can extend from the open-source components (libraries, in this example) to your proprietary code through the linking method.
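A dynamic-linkage check can be sketched by parsing ldd-style loader output for each binary and comparing the resolved libraries against your license inventory. The library-to-license table here is a hard-coded assumption; in practice it would come from the open-source inventory described later:

```python
import re

# Hypothetical library -> license mapping; a real check would consult the
# compliance inventory rather than a hard-coded table.
LIB_LICENSES = {"libreadline.so": "GPL-3.0", "libcrypto.so": "Apache-2.0"}

def linked_libraries(ldd_output):
    """Extract shared-library base names from ldd-style output lines."""
    return re.findall(r"^\s*(\S+\.so)[.\d]*\s*=>", ldd_output, re.M)

def gpl_conflicts(ldd_output):
    """Return the GPL-licensed libraries a proprietary binary links against."""
    return [lib for lib in linked_libraries(ldd_output)
            if LIB_LICENSES.get(lib, "").startswith("GPL")]
```

Any library the check flags becomes a linkage conflict to report to Engineering; rerunning the same check after the fix verifies the resolution.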
5. Legal Review. The best practices of the legal review include:
• Review the report generated by the scanning tool attached to the compliance ticket.
• Review the license information provided in the compliance ticket.
• Review comments left in the compliance ticket by engineers and OSRB members.
• Flag any licensing conflict and reassign compliance ticket to Engineering to rework code if needed.
• Contact the open-source project when licensing information is not clear, not available or the code is licensed under more than one license with unclear terms/conditions.
• Decide on incoming and outgoing license(s).
6. Final Review. The final review is usually an OSRB face-to-face meeting during which open-source software packages are approved or denied usage. A good practice is to record the minutes of the meeting and the summary of the discussions leading to the decisions of approval or denial. This information can become very useful when you receive compliance inquiries. For approved open-source packages, the OSRB would then compile the list of obligations and pass it to appropriate departments for fulfillment.
Source of Information : Linux Journal 185 September 2009
Sunday, August 30, 2009
Several departments are involved in ensuring open-source compliance. Here’s a generic breakdown of the different departments and their roles in achieving open-source compliance:
• Legal: advises on licensing conflicts, participates in OSRB reviews, and reviews and approves content of the open-source external portal.
• Engineering and product team: submits OSRB requests to use open-source software, participates in the OSRB reviews, responds promptly to questions asked by the compliance team, maintains a change log for all open-source software that will be made publicly available, prepares source code packages for distribution on the company’s open-source public portal, integrates auditing and compliance as part of the software development process checkpoints, and takes available open-source training.
• OSRB team: drives and coordinates all open-source activities, including driving the open-source compliance process; performs due diligence on suppliers’ use of open source; performs code inspections to ensure inclusion of open-source copyright notices, change logs and the like in source code comments; performs design reviews with the engineering team; compiles a list of obligations for all open-source software used in the product and passes it to appropriate departments for fulfillment; verifies fulfillment of obligations; offers open-source training to engineers; creates content for the internal and external open-source portals; and handles compliance inquiries.
• Documentation team: produces open-source license file and notices that will be placed in the product.
• Supply chain: mandates third-party software providers to disclose open-source software used in what is being delivered.
• IT: supports and maintains compliance infrastructure, including servers, tools, mailing lists and portals; and develops tools that help with compliance activities, such as linkage analysis.
Source of Information : Linux Journal 185 September 2009
Saturday, August 29, 2009
Before you can share your media library content, you have to do some configuration. First, the PC must be connected to your home network, and you must have already configured the PC’s network connection to access your network as a private network. If you haven’t done this, here’s the quickest way.
Right-click the network connection icon in the system tray and choose Network and Sharing Center. Then, in the Network and Sharing Center window that appears, click Customize below the network map and next to your home network (which will typically have a name like Network). In the Set Network Location dialog box that appears, choose Private for location type and click Next. Then click Close. (Make sure you do this for your home network only, and not for any public networks you might visit.)
Please note that you need to repeat this process on any other Windows Vista–based PCs with which you’d like to share media libraries. This step isn’t required for Windows XP.
Next you need to configure Windows Media Player 11 for sharing. To do so, open the Media Player, open the Library menu (by clicking the small arrow below the Library toolbar button), and choose Media Sharing. This will display the Media Sharing dialog box.
In Media Sharing, select the check box Find Media that Others are Sharing if you’d like to find other shared music libraries on your home network. If you want to share the music library on the current PC, select the Share My Media check box, and then examine the icons that represent the various PCs and devices that you can share with. Select each in turn and click the Allow button for the devices with which you’d like to create sharing relationships. As you allow devices, a green check box will appear on their icons (without the color, of course).
If you’d like to specify the type of content you want to share, click the Settings button. You can choose between music, pictures, and videos, and choose whether to filter via star ratings or parental ratings.
Source of Information : Wiley Windows Vista Secrets SP1 Edition
Friday, August 28, 2009
You can’t use an iPod natively with Windows Media Player because Microsoft knows that if it did the engineering work to make it happen, Apple would simply launch an antitrust lawsuit. Given this limitation, you might think that getting an iPod to work with Windows Media Player 11 is a non-starter, but as it turns out, an enterprising third-party company, Mediafour (www.mediafour.com/), makes an excellent solution called XPlay that adds iPod compatibility to Windows Media Player. XPlay makes the iPod work just like any other portable music device in Windows Media Player.
What about content purchased from Apple’s online store, the iTunes Store? After all, Apple offers an unparalleled number of digital songs and albums, TV shows, movies, audio books, music videos, and other content from the iTunes Store. Unfortunately, because most of this content is protected in some manner with Digital Rights Management (DRM) technology and encoded in Media Player–unfriendly formats, there’s no easy answer; indeed, to date, no one has provided a way to deprotect video content sold via iTunes. But music is a bit simpler. Apple offers some music in nonprotected AAC format. This music won’t work in Windows Media Player, but you can use Apple’s desktop-based iTunes software to convert it to MP3 format, which works fine. The protected songs, also sold in AAC format, are a bit more problematic: You’ll have to burn the songs to CD and then manually re-rip them back to the PC in MP3 format. Aside from the sheer effort involved, this method isn’t particularly elegant because Apple’s low-quality 128 Kbps AAC tracks aren’t great source material: The resulting MP3 files will likely be hissy, tinny, or otherwise thin sounding. My advice is to avoid purchasing music from iTunes and choose a better solution, such as the Amazon MP3 store.
Source of Information : Linux Journal 185 September 2009
Thursday, August 27, 2009
Moving forward to Windows Vista, the focus from a networking standpoint was to make things as simple as possible while keeping the system as secure and reliable as possible as well. At a low level, Microsoft rewrote the Windows networking stack from scratch in order to make it more scalable, improve performance, and provide a better foundation for future improvements and additions. Frankly, understanding the underpinnings of Vista’s networking technologies is nearly as important as understanding how your car converts gasoline into energy. All you really need to know is that things have improved dramatically under the hood.
Here are some of the major end-user improvements that Microsoft has made to Windows Vista’s networking:
• Network and Sharing Center: In previous versions of Windows, there wasn’t a single place to go to view, configure, and troubleshoot networking issues. Windows Vista changes that with the new Network and Sharing Center, which provides access to new and improved tools that take the guesswork out of networking.
• Seamless network connections: In Windows XP, unconnected wired and wireless network connections would leave ugly red icons in your system tray, and creating new connections was confusing and painful. In Vista, secure networks connect automatically and an improved Connect To option in the Start Menu provides an obvious jumping-off point for connecting to new networks.
• Network Explorer: The old My Network Places explorer from previous versions of Windows has been replaced and upgraded significantly with the new Network Explorer. This handy interface now supports access to all of the computers, devices, and printers found on your connected networks, instead of just showing network shares, as XP did. You can even access network-connected media players, video game consoles, and other connected device types from this interface.
• Network Map: If you are in an environment with multiple networks and network types, it can be confusing to know how your PC is connected to the Internet and other devices, an issue that is particularly important to understand when troubleshooting. Vista’s new Network Map details these connections in a friendly graphical way, eliminating guesswork.
• Network Setup Wizard: If you’re unsure how to create even the simplest of home networks, fear not: Windows Vista’s improved Network Setup Wizard makes it easier than ever thanks to integration with Windows Rally (formerly Windows Connect Now) technologies, which can be used to autoconfigure network settings on PCs and compatible devices. This wizard also makes it easy to configure folder sharing (for sharing documents, music, photos, and other files between PCs) and printer sharing.
• Folder and printer sharing: The model for manually sharing folders between PCs has changed dramatically in Windows Vista, but Microsoft has intriguingly retained an alternate interface that will be familiar to those who are adept at setting up sharing on XP-based machines. I’ll show you why this type of folder sharing is, in fact, easier to set up than Vista’s new method. Printer sharing, meanwhile, works mostly like it did in XP.
Source of Information : Wiley Windows Vista Secrets SP1 Edition
Wednesday, August 26, 2009
Companies face several challenges as they start creating the compliance infrastructure needed to manage their open-source software consumption. The most common challenges include:
1. Achieving the right balance between processes and meeting product shipment deadlines. Processes are important; however, they have to be light and efficient so that they aren’t regarded as overhead on the development process and don’t force engineers to spend too much time on compliance activities.
2. Think long term and execute short term: the priority of all companies is shipping products on time, while also building and expanding their internal open-source compliance infrastructure. Therefore, expect to build your compliance infrastructure as you go, doing it the right way and keeping scalability in mind for future activities and products.
3. Establish a clean software baseline. This is usually an intensive activity over a period of time. The results of the initial compliance activities include a complete software inventory that identifies all open-source software in the baseline, a resolution of all issues related to mixing proprietary and open-source code, and a plan for fulfilling the license obligations for all the open-source software.
Building a Compliance Infrastructure
Here are the essential building blocks of an open-source compliance infrastructure required to enable open-source compliance efforts:
• Open-source review board (OSRB): comprises representatives from engineering, legal and open-source experts. The OSRB reviews requests for use, modification and distribution of open-source software and determines approval. In addition, the OSRB serves as a steering committee to define and manage your company’s open-source strategy.
• Open-source compliance policy: typically covers usage, auditing and post-compliance activities, such as meeting license obligations and distribution of open-source software. Usual items mandated in a compliance policy are approval of OSRB for each piece of open-source software included in a product, ensuring that license obligations are fulfilled prior to customer receipt, mandatory source code audits, mandatory legal review and the process and mechanics of distribution.
• Open-source compliance process: the work flow through which a request to use an open-source component goes before receiving approval, including scanning code, identifying and resolving any flagged issues, legal review and the final decision. See HP’s “FOSS Management Issues” article at www.fsf.org/licensing/compliance for an example of a compliance process.
• Compliance project management tool: some companies use bug-tracking tools that already were in place, and other companies rely on professional project management tools. Whatever your preference is, the tool should reflect the work flow of your compliance process, allowing you to move compliance tickets from one phase of the process to another, providing task and resource management, time tracking, e-mail notifications, project statistics and reporting.
• Open-source inventory management: it is critical to know what open-source software is included for each product, including version numbers, licensing information, compliance information and so on. Basically, you need to have a good inventory of all your open-source assets—a central repository for open-source software that has been approved for deployment. This inventory is handy for use by engineering, legal and OSRB.
• Open-source training: ensures that employees have a good understanding of your company’s open-source policies and compliance practices, in addition to understanding some of the most common open-source licenses. Some companies go one step further by mandating that engineers working with open-source software take open-source training and pass the evaluation.
• Open-source portals: companies usually maintain two open-source portals: an internal portal that houses the open-source policies, guidelines, documents, training and hosts a forum for discussions, announcements, sharing experiences and more; and an external portal that is a window to the world and the Open Source community and a place to post all the source code for open-source packages they use, in fulfillment of their license obligations with respect to distribution.
• Third-party software due diligence: you should examine software supplied to you by third parties carefully. If third-party software includes open-source software, ensure that license obligations are satisfied, because this is your responsibility as the distributor of a product that includes open-source software. You must know what goes into all of your product’s software, including software provided by outside suppliers.
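The inventory-management building block above can be sketched as a small approved-components registry. The field names and status values are assumptions for illustration, not a description of any particular tool:

```python
from dataclasses import dataclass, field

# Minimal sketch of an open-source inventory entry; real systems track far
# more (compliance ticket IDs, products, source URLs, audit results, etc.).
@dataclass
class Component:
    name: str
    version: str
    license: str
    status: str = "pending"   # pending -> approved or denied by the OSRB
    obligations: list = field(default_factory=list)

class Inventory:
    """Central registry of open-source components and their OSRB status."""

    def __init__(self):
        self._items = {}

    def register(self, comp):
        # Each (name, version) pair is tracked separately -- a new version
        # of an approved package goes back through the process.
        self._items[(comp.name, comp.version)] = comp

    def approve(self, name, version, obligations):
        comp = self._items[(name, version)]
        comp.status = "approved"
        comp.obligations = list(obligations)

    def approved(self):
        return [c for c in self._items.values() if c.status == "approved"]
```

Keying the registry on (name, version) mirrors the earlier point that a new version of a previously approved package must be rescanned and re-approved rather than inherit its predecessor’s status.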
Source of Information : Linux Journal 185 September 2009
Tuesday, August 25, 2009
Traditionally, platforms and software stacks were built using proprietary software and consisted of various software building blocks that came from different companies with negotiated licensing terms. The business environment was predictable, and potential risks were mitigated through license and contract negotiations with the software vendors. In time, companies started to incorporate open-source software in their platforms for the different advantages it offers (technical merit, time to market, access to source code, customization and so on). With the introduction of open-source software to what once were purely proprietary software stacks, the business environment diverged from familiar territory and corporate comfort zones. Open-source software licenses are not negotiated agreements. No contracts are signed with software providers (that is, open-source developers). Companies now must deal with dozens of different licenses and hundreds or even thousands of licensors and contributors. As a result, the risks that used to be managed through license negotiations now must be managed through compliance and engineering practices.
Enter Open-Source Compliance
Open-source software initiatives provide companies with a vehicle to accelerate innovation through collaboration with a global community of open-source developers. However, accompanying the benefits of teaming with the Open Source community are very important responsibilities. Companies must ensure compliance with applicable open-source license obligations. Open-source compliance means that open-source software users must observe all copyright notices and satisfy all license obligations for the open-source software they use. In addition, companies using open-source software in commercial products, while complying with the terms of open-source licenses, want to protect their intellectual property and that of third-party suppliers from unintended disclosure. Open-source compliance involves establishing a clean baseline for the software stack or platform code and then maintaining that clean baseline as features and functionalities are added. Failure to comply with open-source license obligations can result in the following:
• Companies paying possibly large sums of money for breach of open-source licenses.
• Companies being forced by third parties to block product shipment and do product recalls.
• Companies being mandated by courts to establish a more rigorous open-source compliance program and appoint an “Open-Source Compliance Officer” to monitor and ensure compliance with open-source licenses.
• Companies losing their product differentiation and intellectual property rights protection when required to release source code (and perceived trade secrets) to the Open Source community and effectively license it to competitors royalty-free.
• Companies suffering negative press and unwanted public scrutiny as well as damaged relationships with customers, suppliers and the Open Source community.
There are three main lessons to learn from the open-source compliance infringement cases that have been made public to date:
1. Ensure that your company has an open-source management infrastructure in place. Open-source compliance is not just a legal exercise or merely checking a box. All facets of a company typically are involved in ensuring proper compliance and contributing to the end-to-end management of open-source software.
2. Make open-source compliance a priority before a product ships. Companies must establish and maintain consistent open-source compliance policies and procedures and ensure that open-source license(s) and proprietary license(s) amicably coexist well before shipment.
3. Create and maintain a good relationship with the Open Source community. The community provides source code, technical support, testing, documentation and so on. Respecting the licenses of the open-source components you use is the minimum you can do in return.
Source of Information : Linux Journal 185 September 2009
Monday, August 24, 2009
From a development standpoint, Google noted the difficulty in making this user experience acceptable on platforms with very different capabilities and conventions. Rather than just doing a brute-force port, the Google Chrome team has focused on often taking a step back from the code, looking at the larger picture of what a certain part of the code accomplishes for the user, and then translating that into more abstract benefits for the respective Linux, Mac OS or Windows user. On some platforms, native capability exists in whole or in part for core functionality, such as sandboxed processes, but not on others. This fact has required a wide range of refactoring or writing new code depending on existing functionality found on the respective platform. One example of making Google Chrome good on the Mac platform is what the company did with WebKit. The team first had to come to terms with what it meant to use WebKit for Chrome and determine what it could provide. Interestingly, Google says that in the examples of Chrome or Safari, only about half the code is WebKit. In addition, WebKit was never really designed to be run in a separate process from the rest of the browser UI. In order to accomplish this, Google had to write much of its own drawing and event handling “plumbing” rather than simply dropping a WebView into a window in Interface Builder. However, the developers have been able to draw on much of the work that was done for the Windows version to solve this problem.
Of course, Google Chrome’s entire development process is much more efficient and potent given its open-source nature. More important than trying to “win the browser war” in the traditional sense—that is, get people to use Google Chrome as their primary browser—the company feels its open-source efforts with Chrome already have stimulated and seeded a great deal of innovation and made other browsers better than they would have been in Google Chrome’s absence. In fact, Google takes at least some credit for speed improvements and security enhancements that have taken place in other browsers during the past year, which is advantageous for everyone.
Given that Google Chrome is open source, we were curious to know how involved outside developers have been in its development. Although my contacts were unable to give me specific numbers, I was told that outside participation is very high, especially in terms of bug reports from users of the early developer builds of the browser. Google also works very closely with the WebKit team, so changes made by WebKit developers at Apple or others in the WebKit community are integrated into Google Chrome as well.
And now, on to the interview with Evan Martin and Mads Ager.
Source of Information : Linux Journal 185 September
Sunday, August 23, 2009 | 0 Comments
Another way to improve performance on systems with 2GB or less of RAM is to use a new Windows Vista feature called ReadyBoost. This technology uses spare storage space on USB-based memory devices such as memory fobs to increase your computer’s performance. It does this by caching information to the USB device, which is typically much faster than writing to the hard drive. (Information cached to the device is encrypted so it can’t be read on other systems.)
There are a number of caveats to ReadyBoost. The USB device must meet certain speed characteristics or Vista will not allow it to be used in this fashion. Storage space that is set aside on a USB device for ReadyBoost cannot be used for other purposes until you reformat the device, and you cannot use one USB device to speed up more than one PC. (Likewise, you cannot use more than one ReadyBoost device on a single PC.) In my testing, ReadyBoost seems to have the most impact on systems with less than 1GB of RAM, and it clearly benefits notebooks more than desktops, as it’s often difficult or impossible to increase the RAM on older portable machines.
When you insert a compatible USB device into a Windows Vista machine, you will see a Speed Up My System option at the bottom of the AutoPlay dialog that appears. When you select this option, the ReadyBoost tab of the device’s Properties dialog will appear, enabling you to set aside a portion of the device’s storage space; Windows recommends the ideal amount based on the capacity of the device and your system’s RAM.
Obviously, ReadyBoost won’t work unless the USB memory key is plugged into your PC. This can be a bit of a hassle because you need to remember to keep plugging it in every time you break out your portable computer. Still, ReadyBoost is a great enhancement and a welcome feature, especially when a PC would otherwise run poorly with Windows Vista.
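As a rough illustration of the sizing logic, here is a sketch in Python of the commonly cited ReadyBoost guidance: devote between one and three times your system RAM to the cache, with Vista using at most 4,096MB on a single device. Windows’ actual recommendation heuristic is internal to Vista, so treat these numbers as assumptions, not the real algorithm:

```python
def readyboost_size_mb(ram_mb, device_free_mb):
    """Suggest how many MB of a USB device to devote to ReadyBoost.

    Assumption: the often-quoted 1x-3x-RAM guidance, with a 4,096MB
    per-device ceiling; Vista's real sizing logic is not public.
    """
    ceiling = min(3 * ram_mb, 4096)          # no point past 3x RAM or 4GB
    suggested = min(device_free_mb, ceiling)  # limited by free space
    # Below a 1:1 ratio the cache is generally considered too small to help.
    return suggested if suggested >= ram_mb else 0

# A 1GB notebook with an 8GB memory key: devote 3,072MB to the cache.
print(readyboost_size_mb(1024, 8192))  # 3072
```

The 1GB case is exactly the scenario the text identifies as benefiting most: a RAM-starved notebook that can’t be upgraded.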
Saturday, August 22, 2009 | 0 Comments
A long time ago, in an operating system far, far away, the PCs of a bygone era had woefully inadequate amounts of RAM, and the versions of Windows used back then would have to regularly swap portions of the contents of RAM back to slower, disk-based storage called virtual memory. Virtual memory was (and still is, really) an inexpensive way to overcome the limitations inherent to a low-RAM PC; but as users ran more and more applications, the amount of swapping would reach a crescendo of sorts as a magical line was crossed and performance suffered.
Today, PCs with 2 to 4GB of RAM are commonplace, so manually managing Windows Vista’s virtual memory settings is rarely needed. That said, you can do so if you want, though you’ll have to navigate through a stupefying number of windows to find the interface:
1. Open the Start Menu, right-click on Computer, and choose Properties.
2. In the System window that appears, click the Advanced System Settings link in the Tasks list on the left.
3. In the System Properties window that appears, navigate to the Advanced tab and click the Settings button in the Performance section.
4. In the Performance Options dialog that appears, navigate to the Advanced tab and click the Change button. (Whew!)
By default, Vista is configured to automatically maintain and manage the paging file, which is the single disk-based file that represents your PC’s virtual memory. Vista will grow and shrink this file based on its needs, and its behavior varies wildly depending on how much RAM is on your system: PCs with less RAM need virtual memory far more often than those with 4GB of RAM (or more with 64-bit versions of Vista). While I don’t generally recommend screwing around with the swap file, Vista’s need to constantly resize the paging file on low-RAM systems is one exception. The problem with this behavior is that resizing the paging file is a resource-intensive activity that slows performance. Therefore, if you have less than 2GB of RAM and can’t upgrade for some reason, you might want to manually manage virtual memory and set the paging file to be a fixed size—one that won’t grow and shrink over time.
To do this, uncheck the option titled Automatically Manage Paging File Sizes for All Drives and select Custom Size. Then determine how much space to set aside by multiplying the system RAM (2GB or less) by a factor of 2 to 3. On a PC with 2GB of RAM, for example, you might specify a value of 5120 (2GB of RAM is 2,048MB, and 2,048 times 2.5 is 5,120). Enter this value in both the Initial Size and Maximum Size text boxes to ensure that the page file does not grow and shrink over time.
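The arithmetic above is simple enough to restate in code. This Python snippet is just the rule of thumb from the text (RAM times a 2x-3x multiplier), not anything Vista itself computes:

```python
def fixed_pagefile_mb(ram_mb, factor=2.5):
    """Fixed paging-file size: system RAM times a 2x-3x multiplier."""
    return int(ram_mb * factor)

# The example from the text: 2,048MB of RAM times 2.5 = 5,120MB.
print(fixed_pagefile_mb(2048))      # 5120
print(fixed_pagefile_mb(1024, 3))   # 3072, a 1GB system at the top of the range
```

Whatever value you compute, remember to enter it in both the Initial Size and Maximum Size boxes so the file never resizes.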
Source of Information : Wiley Windows Vista Secrets SP1 Edition Oct 2008
Friday, August 21, 2009 | 0 Comments
Zombies and raids were the order of the day (and night) at InfernaLAN, but it was Joshua Sniffen’s Seraphim that bathed the Intel cafeteria in DuPont, Wash., in a heavenly light. Its beauty and craftsmanship smote the competition hip and thigh. This 3.5GHz Core i7 mod started out as a Lian-Li PC-V351 micro-ATX cube. Even with the extra room this chassis afforded Joshua over the PC-V300, he still needed to make room for a dual-pump watercooling setup, cold cathodes, and an EVGA GeForce GTX 260 Core 216 Superclocked, not to mention a beautiful, custom Plexiglas GPU water block he created. Blessed with a Dremel, he proceeded to make the interior of the Lian-Li more holy. Still, Joshua needed the patience of Job to install everything. He etched the amazing Armored Angel drawing by James Dies into his transparent GPU water block cover. To dress the tool marks around the edges of the block—or rather his third, as the first two blocks cracked—he touched them up with a pillar of propane fire. A custom LED harness gave this angel its halo. Next, he mounted the graven image on a Koolance CPU-350 cooling block, which necessitated the second pump. The copper radiator was a new, unreleased model from Koolance. (Hey, it pays to have connections with one of the Chosen.) Joshua nearly had to strike a Faustian bargain to install the big pixel-pusher, and the integration of the cooling system proved so devilish that he had to snake some of the hoses into place using chopsticks. Joshua customized and/or fabricated the wiring harnesses to be easily hidden and no longer than necessary. He took the same approach with the lengths of the coolant hoses. Another modification was to move the drive mounts ahead to allow an 80mm fan to fit flush with the rear panel. Rubber grommets keep the SATA drives from passing any vibrations to the chassis. Finally, a cunningly attached bay cover completely stealths the Blu-ray drive yet ejects the tray with the touch of a finger.
Joshua credits Christopher Jahosky and James Dies for the artwork on the outside of the case, which he lovingly rendered in vinyl. And we couldn’t help but credit Joshua for hand-crafting the top mod in CPU’s InfernaLAN competition.
Source of Information : CPU Magazine 07 2009
Thursday, August 20, 2009 | 0 Comments
Most people don’t understand the basics behind how their computers boot up when they first turn them on. Most Windows users think that when you start up your system, it runs NTLDR, which in turn reads the BOOT.INI configuration file. While this is the most obvious and observable way most of us understand the PC startup process, there’s a lot more going on underneath. And it’s in the nuts and bolts that Linux differs significantly from Windows when a system starts up.
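For reference, a typical single-OS BOOT.INI of that era looked something like the sketch below. The ARC path and the description string vary per system; this is an illustrative example, not taken from the article:

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
```

NTLDR reads this file at every boot; the timeout value and the list of entries under [operating systems] are what produce the familiar boot menu on multiboot Windows systems.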
What Everybody Does
When you press the power button on your system, it initializes all hardware devices and goes through a series of tests, commonly known as the POST (power-on self-test). These tests naturally run before any operating system is loaded. Once the POST is finished, the hardware looks for the first available bootable device. Systems today let you boot from a CD, hard drive, removable device, or even a network device. A bootable device is one whose first 512 bytes contain boot information about the device itself. This is known as the boot sector of the device; for hard drives, it resides in the MBR (master boot record), where the boot code sits just ahead of the partition table. Once the system finishes its POST, it reads the data in the MBR and then executes that code. Up until this point, the Windows and Linux bootup procedures are identical. What happens next depends not only on which operating system you use, but also, in the case of Linux, on which boot loader you use. With recent Windows installations, the MBR code will load NTLDR, which reads the BOOT.INI configuration file. This file contains information for different installations of Windows, as well as parameters to be passed to those installations. A Linux user has two popular boot loaders available: LILO (Linux Loader) and GNU GRUB. LILO is one of the earliest Linux boot loaders and is rarely used in newer Linux installations. Most modern Linux distributions use GNU GRUB, or GRUB for short, as their boot loader, due to its ease of use and configurability.
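The boot-sector layout described above is easy to verify in code. This Python sketch checks the conventions the article relies on: a boot sector is exactly 512 bytes, and in an MBR the boot code is followed by a 64-byte partition table at offset 446 and the 0x55AA boot signature at offset 510:

```python
def parse_mbr(sector: bytes):
    """Return the four 16-byte partition entries, or None if the
    sector is not 512 bytes or lacks the 0x55AA boot signature."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return None
    # Boot code occupies bytes 0-445; the partition table follows.
    return [sector[446 + i * 16: 446 + (i + 1) * 16] for i in range(4)]

# A blank sector has no signature, so it is not bootable:
print(parse_mbr(bytes(512)))                     # None
# Stamping the signature onto 510 zero bytes makes it parse:
print(len(parse_mbr(bytes(510) + b"\x55\xaa")))  # 4
```

On a Linux box you could feed this the real thing with `open("/dev/sda", "rb").read(512)` (root required); that same 512-byte, 0x55AA convention is the space that NTLDR’s, LILO’s, and GRUB’s first stages all have to squeeze into.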
Boot From LILO
But let’s go back to the beginning and cover how LILO works, because a lot of legacy systems still have it. As with most boot loaders, LILO is much too big to fit into 512 bytes and so is loaded in stages. The first stage is when the initial 512 bytes are loaded from the boot sector on the hard drive. The purpose of this stage is to get things started and to load the next stage. LILO gives you a status of what stage it’s in based on how many letters of its name it has displayed. The first “L” indicates that the first stage has loaded. Just before LILO hands off to the next-stage loader, it displays an “I.” If LILO hangs with just “LI” on the screen, it typically means that the file LILO is expecting can’t be found. The disk might be damaged or perhaps the file has moved. But assuming everything’s fine, the secondary loader is executed and the second “L” will appear. The second-stage boot loader for LILO is where the guts of the boot process occur. Here the secondary stage loader reads in the Lilo.conf file, which tells LILO where its various parts are and what the different boot options are. The second-stage loader also reads in what is known as a map file. The map file contains a collection of locations for the bootable partitions, including each partition’s respective disk. When LILO has successfully loaded the map file, it’ll display the “O” in its name and you’ll see a “boot:” prompt. At the “boot:” prompt, simply type the name of the operating system you want to run and press ENTER. If you’re not sure what OSes are on your system, press the TAB key. If you don’t make a selection after a little while, it’ll boot the first Linux OS it can find. If you want to change the display name of the available operating systems, or the timeout to boot the default OS, you’ll need to modify LILO’s configuration file, /etc/lilo.conf.
LILO depends very heavily on the configuration file for all of its bootup information, so any changes to Lilo.conf can potentially make it impossible to boot next time. But don’t worry about making changes to Lilo.conf; there’s a GUI utility to help you with those things. If you’re using the KDE window manager, click the KDE icon and then select Control Center. Expand the System Administration heading and then select Boot Manager (LILO). This will give you a handy graphical utility with which to manipulate the Lilo.conf file. You can add, delete, and edit entries in the Lilo.conf file, and it’ll handle the grunt work of applying your changes.
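For reference, a minimal Lilo.conf looks something like the sketch below. The device names, kernel path, and labels are placeholders for illustration, not values taken from the article:

```
# /etc/lilo.conf
boot=/dev/hda        # install LILO's first stage in the MBR of the first disk
map=/boot/map        # the map file the second-stage loader reads
prompt               # show the boot: prompt
timeout=50           # tenths of a second before the default entry boots
default=linux

image=/boot/vmlinuz  # a Linux entry
    label=linux
    root=/dev/hda1
    read-only

other=/dev/hda2      # chain-load another OS, such as Windows
    label=windows
```

One LILO quirk worth remembering: after any edit, you (or the KDE Boot Manager on your behalf) must re-run /sbin/lilo so the map file is rebuilt, which is exactly why a stale or botched Lilo.conf can leave a system unbootable.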
In 1995 the Free Software Foundation created a specification on how boot loaders should interact with operating systems. To demonstrate the specification in action, GNU GRUB, or simply GRUB for short, was created. Like LILO, GRUB is too big to fit into 512 bytes and thus loads in stages. GRUB’s stage 1 loader is functionally identical to the first stage of the LILO boot loader: Its goal is to load the next stage. But what happens next with GRUB depends on what type of file system the stage 2 loader sits on. Most Linux flavors default to using the ext2 or ext3 file systems when laying out a disk. In these cases the GRUB stage 1 loader will load up the stage 2 loader without any problems. If you have Linux loaded on some other file system, such as xfs, jfs, fat, or reiserfs, then the GRUB stage 1 loader will load the stage 1.5 loader. The only purpose of the stage 1.5 loader is to act as a bridge between the stage 1 and stage 2 loaders. That is, the GRUB stage 1.5 loader knows enough about the underlying file system to run the stage 2 loader. Once the GRUB stage 1.5 loader has run, it’ll load the GRUB stage 2 loader. The GRUB stage 2 loader is where most of the action happens with GRUB, and it’s what most people think is GRUB. Here, the user is given a text menu with a list of available operating systems, and whichever one he chooses gets loaded. GRUB offers a number of improvements over LILO, such as the ability to specify an unlimited number of operating systems and boot from a network device. But perhaps best of all is that, unlike LILO, you can easily modify the GRUB configuration file without having to reinstall GRUB or play around with the boot sector. Because GRUB is newer, it has its own set of GUI tools to help modify its configuration file. A popular one is the KGRUBEditor, which requires the KDE 4 libraries to run. When you run it, all of the bootable entries in GRUB’s initial text menu are shown.
As with the KDE Boot Manager for LILO, you can add, delete, or edit entries, and it’ll take care of the configuration files themselves.
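For reference, a GRUB (legacy) menu.lst defining a simple dual-boot setup might look like this sketch. Again, disk names, kernel paths, and titles are placeholders for illustration:

```
# /boot/grub/menu.lst
default 0            # boot the first title if the user makes no choice
timeout 5            # seconds to wait at the text menu

title Linux
    root (hd0,0)     # GRUB's naming: first disk, first partition
    kernel /boot/vmlinuz root=/dev/hda1 ro
    initrd /boot/initrd.img

title Windows
    rootnoverify (hd0,1)
    chainloader +1   # hand off to that partition's own boot sector
```

Unlike Lilo.conf, this file is read fresh at boot time, so edits take effect on the next restart with no reinstall step.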
Still Not In Linux
So after we’ve run either LILO or GRUB through its paces to boot the operating system, the Linux kernel is loaded, right? Wrong. Linux is designed to run on as many different systems as possible. It does this by having a large library of modules to enable different features. But due to the sheer number and size of modules, they can’t all be available at boot time. Also, Linux has some special features that require special work before they can be fully utilized, like booting off a network device. To address these situations, after you’ve gone through all of the boot loaders, you haven’t really loaded Linux yet. Instead, you actually boot into a very small Linux environment with just enough drivers to get the real Linux kernel started. This small environment is known as the initial ramdisk, or initrd for short. Once initrd has loaded, it moves itself out of the way, loads the real Linux kernel, and then deletes itself from memory. Only after initrd has freed itself from memory have you truly booted into Linux. Most people have probably never thought about how Linux boots up, so they’re stuck if it doesn’t boot. But after reading this, if your Linux box doesn’t boot, you at least know where to start looking to troubleshoot the problem.
Source of Information : CPU Magazine 07 2009
Wednesday, August 19, 2009 | 0 Comments
As manufacturers rose to meet the demand for faster processors, chip performance was eventually hampered by extreme temperatures. The solution became multicore processors. Of course, we’ve developed our own cooling methods, using water, liquid nitrogen, or specialty coolants, to bring down our CPUs’ temps. Dr. Rama Venkatasubramanian, a senior researcher at RTI International, and a team of other researchers at Intel and Arizona State University have stepped into this hot topic with a unique answer: micro-refrigerators that target-cool the hot spots. The researchers see multicore chips on the brink of facing thermal issues similar to those of days of yore, with traditional heatsinks and fans still lacking the oomph to bring down the temps of densely packed circuitry. “Our superlattice thin-film thermoelectric micro-refrigerators would help solve the problem of efficient thermal management of so-called hot spots on a high-performance chip,” says Venkatasubramanian. “Typical chips have highly non-uniform thermal maps, where some areas of intense computation run much hotter than others. We can selectively cool these hot spots with our active cooling while the rest of the area is managed with standard heat-removal means.” The micro-fridges are nano-scale (roughly 10 microns) and can be mounted on chips. Remarkably efficient, they essentially cool on demand and use only 2 to 3W when in use. Venkatasubramanian thinks a finished product could be available in 2011.
Source of Information : CPU Magazine 07 2009
Tuesday, August 18, 2009 | 0 Comments
Application As Service
There are plenty of ways to make programs start automatically when Windows loads, but most of them offer no control beyond that. Application As Service changes that by letting you treat any program as a Windows system service, providing all of the advanced configurability and other features that system services enjoy. As an example, say you use a backup program that you want to run on a regular basis, but other people use your computer and often shut it down accidentally. Using AaS, you can set up the program so it launches automatically when Windows loads, runs when you want it to run, and automatically restarts itself if some bonehead tries to kill it. And those are just some of the things this useful software allows.
Launching the software displays all of your Windows system services, as well as a separate list of Eltima Services you can add to manually. Clicking the Create button lets you establish a new service for any program installed on the PC, and you don’t have to navigate to the executable. Select the program’s shortcut, and the software automatically fills in the entire path to the .EXE. Once AaS creates a service, the GUI provides access to a staggering number of options, and even more are available via the command line interface. You can easily bind the service to a particular CPU or core, assign dependencies so it starts in the proper order if it relies on another service, establish environment variables, and determine how the service reacts when the computer loses power or is rebooted. It’s even possible to close pop-up windows the service may generate when it loads, and full scheduling is easy to implement with the GUI. The software also has functionality to manage services on other PCs remotely, and you can password-protect the software itself so nobody can modify your settings or remove a program from the service list. The price is a bit steep for casual users, but if you manage a lot of PCs or certain programs that run on your machine are critical, AaS provides a convenient, powerful way to manage them that goes well beyond the Startup folder.
Source of Information : CPU Magazine 07 2009
Monday, August 17, 2009 | 0 Comments
Security software is something that most users liken to eating vegetables: We know they’re good for us, but they can leave a bad taste in our mouths. A few years ago, security software developers started adding extra layers of security to what they simply used to call their antivirus programs, generally adding so much bloat, complexity, and system slowness that users started swearing off particular vendors and their products. Believe it or not, we’re happy to report that the times have changed. The vegetables are tasting better. Two developments account for these improvements. The first was massive user revolt: Users directed their rage at security software vendors, and the vendors have listened, spending serious manpower on performance optimizations to keep computers spry. The second basically boils down to the availability of fast, cheap hardware. A $500 computer bought today is five or 10 times faster than a $1,400 computer bought three or four years ago, and a $1,500 computer bought today might as well be a 5-year-old supercomputer. In other words, the modern computers most CPU readers have are finally capable of good performance, even while running security software.
How We Tested
We had several conversations on online gaming forums to get a sense of what power users’ concerns are with security software, and the results were intriguing. General slowness due to background tasks is always a concern, but scheduled background scans and update downloads occurring during gaming, movie viewing, or other periods when performance is important are a big problem, too, so we focused on these areas first. There was a general assumption among the forum community that security effectiveness and ease of use were similar among competitors, so we checked them all out against viruses, spyware, legitimate servers, and illegitimate worms. Most users wanted simplicity, but some still wanted options and detailed controls, so we determined which software had what and how easy it was to use. We also checked that bundled utilities performed as advertised.
Web-usage statistics, along with Valve’s Steam gaming engine statistics, show Windows XP still being used two to four times as much as Vista. And because Windows 7 will soon be pushing Vista out of the marketplace, we tested with WinXP SP3. Valve shows more than 50% of users have CPUs ranging between 2.3GHz and 3.3GHz, and 70% have 2GB or more of RAM; instead of using a low-powered test system (which artificially highlights speed differences in the products), we chose a representative 3GHz Core 2 Duo-based computer with 4GB of RAM and two SATA hard drives to show the real-world effects of installing security software. If you’ve skipped ahead to the charts, you’ve seen that the test system was never overwhelmed by any security suite, though there were definitely measurable differences in speed with many tests. For the record, we also used slower systems and virtual machines for some threat testing and network compatibility. (Note: All prices listed are for a 3-PC license.)
About Malware Detection Rates
Although we’re including the results of our malware-detection and healing tests (performed against real malware collected with our own honeypot and mail servers), it’s time to mention something about statistics and sample size. Outfits such as AV Comparatives (www.av-comparatives.org) have teams of technicians spending months running most of our tested products against a malware “zoo” consisting of 1.3 million malware samples. Having decidedly fewer resources, we selected 25 malware items and one infected thumb drive to test against. There’s no telling if our sample is a representative subsample of AV Comparatives’, or indeed, of the types of malware spreading about in the real world at any given time, so directly comparing our detection rates with AV Comparatives’, or anyone else’s (and there are others you can and should Google for), isn’t terribly meaningful.
AVG Technologies AVG Internet Security 8.5
● ● ●
AVG’s free antivirus program is among the most popular security products on the Internet, so you’ve probably seen it around. As such, AVG Internet Security feels very familiar, essentially adding a two-way firewall, spam filter, drive-by download and phishing shield, and antirootkit abilities to the traditional antivirus/antispyware engine. This is a model most of the security vendors have taken with their suites, but AVG’s interface feels more cluttered than most. AVGIS also feels familiar because it essentially follows the security model of yesterday—deluge the user with security questions all the time, but don’t always be clear about the best course of action. For example, when it detects malware in a download, a pop-up proclaims “Threat Detected” and identifies the infected file and the threat it contains, usually followed by a Close button. Nowhere does the dialog box actually say “threat deleted” or “don’t worry, your computer is safe.” On top of this, the dialog stays up indefinitely, requiring you to pause your work to click it. For some threats, you’re given the option of Heal, Move To Vault, or Ignore with a Remove Threat As Power User checkbox; it seems sensible until you realize that many other products would automatically move the threat to the vault and not bother you with the details. The firewall pops up similar dialogs about network access to most well-known programs and Internet games, even going as far as to jump to the Desktop so you can click Allow, although launched games resume where they are paused. Many other firewalls “automagically” know about thousands of “known-good” programs and just let them work. Other noteworthy aspects include better-than-average spam filtering, the best 3DMark06 score (though they’re all within .2% of each other), a default setting to scan within compressed files, and the identification of a well-known email password-recovery program as a “potentially dangerous hacking tool.”
Avira Premium Security Suite
● ● ● ●
Avira distributes what is generally the second-most-popular free antimalware program, and like AVG, Avira Premium Security Suite feels a lot like its free cousin, but with more features added. Also like AVG, APSS tends to annoy its user with a lot more pop-ups than necessary, and they contain options likely to confuse. Upon detecting our infected USB flash drive, for example, it popped up a warning identifying the offending file and the infection but made the user select one of the following options: Move to Quarantine, Delete, Overwrite And Delete, Rename, Deny Access (default), and Ignore. If you’re a virus researcher, such options are nice to have, but in almost every other situation it should automatically move the malware into quarantine. Pop-ups are the standard operating mode for the firewall, which even managed to freeze Counter-Strike: Source in its tracks until we ALT-TABbed to the Desktop to view the firewall permission dialog box, clicked the Allow button, and ALT-TABbed back. And then we had to do it again for another component in CS: Source that wanted to get online. It also popped up warnings about the occasional ICMP packet being detected from the Internet—something no other security suite did. On the positive side, APSS tied Eset for the fastest PCMark05 score, and its Web scanner proxy actually sped up large downloads from our test server on the LAN. Its AV Comparatives detection rates were the best. Its interface offered the right combination of ease and access to technical details. We’d be more willing to overlook Avira’s (and AVG’s) issues if these products were free or inexpensive, but the competition has it beat here too, with some being half the price.
BitDefender Internet Security 2009
● ● ● ●
BitDefender Internet Security 2009 feels like the most flexible suite from the moment you fire up its installer, because it peppers you with question after question about your home network, parental and identity control, and so forth. Most other utilities make you dive into the interface to configure these options, or they just turn them all on by default and assume you’ll figure out how to disable them if you need to. Most of the utilities aced at least one of our performance or security tests, but not BitDefender, although this is forgivable given its low price. Its pop-ups are very straightforward and make it clear that it’s on the job and taking care of problems as it finds them. The firewall doesn’t seem to know much about good and bad applications, as it asked us about almost every Internet-accessing program we had, except for obvious programs such as Web browsers, emailers, and WinZip. One unique option is a removable disc scanner, which asks to run a scan whenever a new disc or flash drive is inserted—very handy in this era of infected thumbdrives. Most of the utilities have a game mode, which tells the software not to display any pop-ups that would interfere with fullscreen games, movies, etc. Some of the better utilities enter game mode automatically, but BitDefender requires you to enter game mode manually. Background updates sometimes require a reboot, which interrupted us more than once. BitDefender’s main interface has two modes, Simple and Advanced, and it’s a good way to minimize confusion for most users. Simple mode basically lets you enable or disable various areas of protection in a broad stroke, while the Advanced mode opens up all the options and fine details. We love all the tools available in Advanced mode, but Basic is a little too busy, considering the options you can’t select there.
Eset Smart Security 4
● ● ● ● ●
Eset’s security programs are known for being light on resources, and Eset Smart Security doesn’t disappoint. It added the least amount of time to a reboot (just three extra seconds!) and tied for the best PCMark05 score. We were pleasantly surprised by its high level of “smarts.” (But then again, “smart” is in its name, so we shouldn’t have been.) Better than almost any other suite, ESS knew what to say and when to say it. When it detects a downloaded virus, for example, it pops up a small red alert dialog box, which identifies the infection and the infected file and simply says “Connection Terminated—Quarantined.” Its game mode fires up automatically when it detects programs running fullscreen. The firewall immediately recognized almost every Internet program, remote-control applet, and online game in our arsenal and let them communicate with the Internet without prompting us, yet it was smart enough to just block our firewall-leaktest program. Although our leaktest program wasn’t really malicious (which is important when considering Norton’s actions), we think ESS made the smart call on this. The ESS interface has two modes, Standard and Advanced. Standard has the bare minimum of commands, but they are the right ones a beginner really needs. Advanced adds a few more options front and center but makes the Setup menu available with direct access to configuration options. Some options that are typical in other products are either slightly hard to locate or simply absent, forcing the user to rely on ESS to make the smart choice automatically. The only glitch we encountered was with our download speed test: Between two LAN machines, speeds slowed to a crawl (slower than DSL rates), yet we saw no slowdown on downloads from the Internet. Eset Smart Security’s smarts and speed don’t come cheap; it’s the most expensive choice here, but if you don’t want to be bothered by your security suite, the cost is worth it.
Kaspersky Internet Security 2009
● ● ● ●
Kaspersky’s security products are generally thought of as the preferred tool for experts, and we can see why. It combines excellent detection rates with very clear on-screen messages, but makes no attempt to simplify the process of keeping your system secure. All the settings and configuration options are sort of hanging off the interface every which way (there’s no basic mode), and current protection statistics and live graphs and charts line every screen in its tabbed interface. If you like being asked about almost every program your security suite encounters, you’ll love KIS. For example, it identified our gaming keypad’s driver as “a potentially hazardous program,” asking if we wanted KIS to run it, delete it, or assign it to a restricted group. Run and Delete are obvious, but the Restricted group is something different. KIS can selectively prohibit apps from having access to the network, the file system, or the Registry, sort of like “sandboxing” them. None of the tested products identified clean-but-not-legal keygen applets as malware (years ago they used to), but KIS was the only one to offer to run them in a restricted mode, preventing them from doing anything untoward. Even the excellent spam filter is expert-oriented: It divides messages into “definitely spam” and “probably spam,” minimizing the messages you need to double-check once the system is trained. KIS is not without drawbacks. It generated the slowest CS: Source and 3DMark06 benchmarks, and, in fact, we had to disable it before 3DMark06 and PCMark05 would even start. (We manually re-enabled it after starting the benchmark programs.) Its firewall was slow to react to a port scan, stealthing many ports only after a scan commenced.
McAfee Internet Security 2009
● ● ● ●
McAfee Internet Security is the surprise low-price leader among the major vendors, with a per-computer price of only $15. And although it did relatively poorly against our relatively small malware zoo, it has the second-highest detection rate in AV Comparatives’ more statistically significant test. It receives definition updates almost constantly and will even update itself to next year’s version automatically if your subscription is active when McAfee performs the switchover, making it an even better deal. MIS automatically enters game mode when fullscreen applications are running, suppressing the pop-ups that would kick you to the Desktop, but game mode doesn’t stop MIS from performing scheduled tasks or getting updates, which can slow things down occasionally. Many of the suites now duplicate McAfee’s SiteAdvisor, a pioneering service that rates how malware-free a Web site is directly in your search engine results, though we found it a tad more sensitive than the competition. It is easily disabled if you’re not with the “better safe than sorry” crowd, and it doesn’t take up a lot of browser space. MIS does a good job of clearly explaining what it’s doing. It quickly dispatches viruses with a clear “McAfee has automatically blocked and removed a Virus,” and the firewall messages are similarly clear, although we encountered them more often than we would have expected with popular network applications. With virtually no training, the spam filter was right 99% of the time, obviously benefiting from McAfee’s server-side training based on all its users’ input. Our biggest problem with MIS was a general level of sluggishness. It took a good 4 seconds from Tray icon double-click to being able to work with the GUI, whereas many other suites are instantaneous. Navigating to certain sub-screens takes a moment, too, discouraging experimentation.
Norton Internet Security 2009
● ● ● ● ●
In the recent past, Symantec was justly targeted by angry users for bloated versions of NIS that slowed computers down, sometimes dramatically. NIS 2009 is a whole new ballgame. The main NIS interface has two CPU bars—one showing overall CPU usage and another showing how much CPU time NIS is consuming, obviously attempting to prove that your slow computer isn’t Symantec’s fault. Other speed-boosting tricks include never performing a background scan or downloading an update unless the CPU is idle, actively freeing RAM when the program is idle (its idle RAM footprint is an almost unbelievable 4.5MB), and taking inventory of known-good executables on your hard drive (and recording their checksums) so it can skip them during system scans to make scans faster. The main GUI appears instantly upon double-clicking its Tray icon, and subscreens open instantly, too. NIS has just one mode (no basic and advanced modes here). Instead, the relatively simple GUI has multiple Settings links that delve deeper into more options. It takes up too much on-screen space but works well. You may not need to reach the detailed configuration settings often because NIS is just about as smart as Eset, almost always making the right choices about what to block (and telling you so unambiguously), what to quarantine, and what to leave alone. It let our leaktest program open ports unopposed, but the program, debatably, isn’t dangerous per se, and NIS’ heuristics accurately identified it as such. Although NIS 2009 is a spry application, it’s worth noting that its benchmarks were generally average, and the antispam filter needed a lot of training before it approached the effectiveness of the competition’s untrained filters. Still, NIS is an excellent combination of price, speed, and features and worth a second look if you’ve been burned by Symantec before.
Panda Internet Security 2009
● ● ● ●
Panda Internet Security is a very attractive, easy-to-use security program that just needs slightly better pricing, a little more smarts when dealing with nasties (or, in our case, a false positive), and a bit of a diet. We have only slight qualms with PIS’ detection model. When we tried downloading test malware, Panda’s concise message of “This file was infected with this virus and was deleted” appeared directly in the content area of the Web browser window and clearly said what had happened, which is great. Infected compressed files, on the other hand, generated no message and actually downloaded and saved, but the ZIP files themselves were empty; PIS silently took care of the problem. Viruses in ZIP files detected with heuristics were renamed with a .VIR extension, which is important to note because our legitimate password-detecting program was renamed inside its ZIP file. When we extracted it and renamed it back to an EXE file, it worked fine, yet a manual scan of it resulted in its being quarantined, meaning the background scanner plays by different rules than the on-demand scanner. PIS isn’t especially well suited to gamers. There’s no game mode (it started downloading an update during a CS: Source benchmark; we threw out that test result), and it consumes a whopping 158MB of RAM when idle. The firewall also didn’t recognize some popular Internet applications and games that other security suites simply allowed without a pop-up. That said, it makes a good security suite for the general populace. The clear interface invites exploration, and it comes with the most well-written Help file. The spam filter’s only mistake was marking a few newsletters that had embedded ads as spam before training, and PIS’ rescue CD (like Norton’s) makes recovering a thoroughly infested Windows installation possible.
Trend Micro Internet Security Pro
● ● ● ●
We haven’t looked at a Trend Micro security product for a while and are pleasantly surprised at the innovative features tucked into the current version of TMISP. However, a general slowness in opening the interface, along with a fairly dramatic increase in most file-related benchmarks, has us hoping the engineers at Trend can give TMISP a NIS 2009-like speed boost in the future. Additionally, its absence from AV Comparatives’ (and other large-sample) tests has us wondering about its overall efficacy against malware, though it aced our limited tests. TMISP clearly announces when it blocks malware and confirms your system is safe, so there are no decisions you need to make to stay malware-free. It also wisely decides which applications to automatically grant network access to and which to block, though manually overriding the built-in smarts is simple. All the products in this roundup come with some sort of Web filter or phishing filter, but TMISP’s Web site safety filter actively blocked our malware test server on our test machine after only about eight virus detections. Only a few days later, our other test machines were blocked from their very first visit to that server. You can’t get infected from a site you can’t connect to, right? An additional button on the browser toolbar evaluates the security of your wireless connection, handy in coffee shops and other hotspots. Although it lacks either an automatic or manual game mode, TMISP does include some interesting features: a keystroke encrypter to foil keyloggers, a remote file vault to back up important files, and an Internet filter that monitors and optionally prevents the transmission of information such as credit card numbers, telephone numbers, and so forth.
Each of the suites has its strengths and weaknesses, but we’re pleased to report that none of the suites we tested will slow down a reasonably modern computer. For those seeking a lightweight suite that doesn’t deluge you with questions and pop-ups, we recommend Eset Smart Security and Norton Internet Security, depending on whether you want the utmost speed in benchmarks or merely very good speed with more security features, respectively. Control freaks and techies who like lots of options should consider Kaspersky Internet Security.
Source of Information : CPU Magazine 07 2009
Sunday, August 16, 2009
It’s been a long time since Adobe Acrobat was the only way to generate PDF files, but the alternatives have normally had trade-offs. Some are merely cheaper than Adobe’s (admittedly high-priced) offerings but still aren’t free. Some free options are fairly complicated multistep processes, involving making a PostScript file and then converting it with Ghostscript to a PDF. PDFCreator strikes a nice balance between cost (it’s free), ease of use, ease of installation, and configurability. Once you get past its only installation gotcha (be sure to uncheck the browser toolbar add-on if you normally avoid such things), installation is a snap. The installer adds Ghostscript (an open-source PostScript interpreter), a Windows printer driver, and a print job manager quickly and painlessly, and it even includes a well-written and complete Help file. Like other tools, making the PDF involves “printing” to the PDFCreator printer driver and specifying a filename for the resulting PDF file, and within a few seconds, the PDF file appears in your default PDF viewer. PDFCreator has a few nice tricks. There are individual settings for controlling the compression rates of different types of graphics, controls for embedding TrueType fonts within your PDF files, and different pathways for directly emailing PDF files once they’re created. If you’re trying to add PDF creation to some sort of workflow, you can automatically execute scripts before or after the actual PDF file is created. There’s even an option to create a network print driver, letting all the computers in a LAN create PDF files without installing the program on multiple PCs. PDFCreator is one of those projects that’s been improving for years yet still refuses to cross the magic “1.0” barrier, so we encountered what is basically a finished and polished product, free from the bugs normally associated with beta software.
Publisher and URL: Philip Chinery and Frank Heindörfer, sourceforge.net/projects/pdfcreator
ETA: Q4 2009
Why You Should Care: There’s no better free PDF creator for Windows with this many options.
Friday, August 14, 2009
If you work in a small network spread across a small or medium-sized office, perhaps you’ve wished for some way to easily communicate with the group that’s faster than email but isn’t public like IM programs. The old WinPopup method of broadcasting short messages to everyone in the LAN is still an option, but a better one is Network Assistant, which has more features than a Swiss Army knife and is easy to use and configure. Network Assistant doesn’t require a centralized server to coordinate things: It just multicasts over IP or sends UDP packets over your LAN, finding other instances of NA automatically. By default, NA users are identified by their Windows usernames, but custom names are available, too. Users on the LAN appear in a list, and by right-clicking a user, you can send a pop-up message, initiate a private chat, send a file, or even send a “beep” over that user’s speakers. Other communications options include an IRC-style group chat (complete with several channels), a shared whiteboard, and a more permanent message board. There are some handy administration tools, too. Users can be divided into groups, making it easy to handle offices with dozens (or hundreds) of users. Operators can share screenshots of their desktops and make their Windows task lists visible, allowing for a basic kind of remote-support option for an IS department. Certain features can be locked with an administrator password, forcing users to stick with the options the administrator has configured. The pricing for Network Assistant boils down to $30 a seat, and you can only buy licenses in groups of two or more. (There are discounts for bulk licenses.) If you’re just looking for a cute way to chat with your wife in another part of the house, this is probably a tad expensive, but it’s actually a good price for small-office, groupware-type software. The 30-day trial should be enough time to figure out if the expense is worth it.
Network Assistant 4.5
Publisher and URL: Gracebyte Software, www.gracebyte.com
ETA: Q3 2009
Why You Should Care: Add big-network communications to any small LAN with ease.
Thursday, August 13, 2009
Are you already running the Windows 7 beta? There’s a lot to love in the OS, and unlike the usual service pack update, we actually get a whole new batch of things that can be tweaked and tuned. We’re only just getting our feet wet with Windows 7, but we already have a top five list of tweaks to improve the new operating system:
1. For the first time in over a decade, you can now uninstall Internet Explorer. The core IE rendering files used by other apps will remain, but the application itself can be snuffed. The quickest way to do this is to go to the Start menu’s Search bar and type turn windows features on or off. This will bring up the Windows Features box. Scroll down to Internet Explorer 8, uncheck it, and click OK.
2. If you do want to run Internet Explorer 8, you may find that new browser tabs exhibit problems, especially with slow load times. So far, this seems to be the result of an upgrade process glitch. While running with administrator rights, go to a command prompt (type cmd at the Search bar) and type regsvr32 actxprxy.dll. Reboot, and the problem should be fixed.
3. Few, if any, features in Windows Vista were more maligned than UAC pop-ups, which ask if you’re really sure you want to do the thing you just went through several clicks to do. Rejoice, friends. Type UAC at the Search bar to bring up the UAC settings window. The vertical slider has four settings. We recommend the third, just above Never Notify. This will still give you a permissions prompt in case of a really odd request, but otherwise, Windows will leave you alone.
4. Windows 7 changes the Start menu’s power button from Hibernate to the more sensible Shut Down function. However, this can be changed to suit your taste. Right-click the Taskbar and select Properties. In the Start Menu tab, use the pull-down menu for Power button action and take your pick of power functions. While you’re there, click the Customize button and look for ways to fine-tune Start menu operations. For example, you can speed up searches by changing the Search Other Files And Libraries item to Search Without Public Folders. If you do a lot of video viewing, perhaps displaying videos (look under Videos) as a link or a menu off of the Start menu will save more time than going through Computer.
5. For some people, every fraction of a second counts. You may have noticed in Vista that when you mouse over a Taskbar application, its preview thumbnail waits 0.4 second before appearing. In Windows 7, you can change this delay period. Go into the Registry Editor (type regedit in the Search bar) and navigate to this key: HKEY_CURRENT_USER\Control Panel\Mouse. Right-click MouseHoverTime and pick Modify. The default is 400 milliseconds. If you crank this down to 100 or even 50, over the course of a year, you will have saved enough time to have at least two extra thoughts while staring at your screen. Make ’em count.
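If you’d rather not edit the value by hand, the same tweak in tip 5 can be applied by saving the following as a .reg file and double-clicking it. This is our own sketch of the change the tip describes, not an official Microsoft snippet; the 100 figure is just the example value from the tip, and note that MouseHoverTime is stored as a string value:

```
Windows Registry Editor Version 5.00

; Cut the Taskbar thumbnail preview delay from the default 400ms to 100ms
[HKEY_CURRENT_USER\Control Panel\Mouse]
"MouseHoverTime"="100"
```

Log off and back on (or reboot) for the change to take effect, and keep an export of the original key if you want an easy way back to the 400ms default.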
Wednesday, August 12, 2009
Turn A Phenom II X3 Into An X4
Back on Feb. 9, AMD launched a handful of AM3 Phenom II processors built using the firm’s latest 45nm manufacturing process. We came away from our initial testing satisfied that AMD had a real winner on its hands in both the Phenom II X3 and X4 processors. The new CPUs launched at higher initial clocks than their predecessors and also exhibited significantly more overclocking headroom. Add the multiplier-unlocked Black Edition models and prices that put the squeeze on some of Intel’s midrange offerings, and you have all the makings for a successful product. But two weeks later, the Korean hardware site Playwares (www.playwares.com) discovered a somewhat obscure BIOS setting available on some AMD chipset motherboards that, when set to Auto, would turn select tri-core Phenom II processors into fully operational quad-cores. As the news filtered through the enthusiast community, others began reporting success with certain motherboards and processors while sales of AMD’s Phenom II processors started to take off. DigiTimes (www.digitimes.com) reported high demand for AMD’s new Phenoms, and motherboard makers claimed AMD could earn up to a 30% share of the global desktop CPU market in Q2 (up from a previous 20%). There’s no data to show a direct correlation, but the exploit couldn’t have hurt the Phenom II’s popularity. We contacted AMD to get its take. Product Manager Damon Muzny responds, “We’re quite excited by the attention and interest folks are showing in the new 45nm Phenom II processors, especially for our Black Edition X3s and X4s.” And AMD should be proud of its Phenom IIs. But if there’s a chance of getting a quad-core Phenom II for $145 or $125, well, that’s just icing on the cake. Read on as we cut through the speculation and attempt to unlock a Phenom II X3 and show you what it takes to do it yourself.
Quad-Core Caveat Emptor
This is the part of the show where we tell you that although it’s possible for you to replicate our successes (more on those later), it’s also more than likely that you’ll replicate our failures (more on this, too). Every three-core processor has four cores, but the disabled core presumably didn’t pass validation testing and was disabled for that reason. Even if you manage to unlock this core, it may be unstable, may negatively impact your system’s performance, and could even render your system unbootable. Should you be one of the lucky few who manages to get a fully functioning quad-core CPU from your Phenom II X3, it will draw significantly more than its rated 95W (expect closer to 125W under full load). Enabling the fourth core would likely also void your warranty and has the potential to shorten the life of your hardware. Having said that, let’s dig in.
The first thing you need is a Phenom II X3 processor, of which there are currently two: the 2.6GHz Phenom II X3 710 and the 2.8GHz Phenom II X3 720 Black Edition. We had a 720 in house that AMD sent us for the initial launch of the series, but we asked the firm to send us a 710, as well. But not just any old Phenom II X3 will do. According to the information we’ve been able to uncover, only 720 and 710 processors from batches manufactured on certain dates have any success with the exploit. Playwares achieved its results using processors manufactured in the fourth week of 2009, while other sources claimed varying degrees of success with processors dated in the 46th and 51st weeks of 2008 and the fourth and sixth weeks of 2009. To determine whether a given Phenom II X3 processor has a good chance of working with this exploit, you’ll need to look at the three rows of alphanumeric characters that make up the part number etched into the CPU’s heatspreader. Pay particular attention to the far-right block of four numbers followed by four letters in the middle row. For our Phenom II X3 720, the number is 0849CPMW, while the 710’s number is 0906MPMW. The numbers refer to the week the die was manufactured. For instance, our 720 was manufactured in the 49th week of 2008, and the 710 was manufactured in the sixth week of 2009. Despite the fact that only the 710’s date batch had any reports of success, we resolved to try both processors. Regarding a motherboard, you will need one with the AMD 790FX or 790GX chipset, specifically one that features the SB750 Southbridge, and a BIOS that supports ACC (Advanced Clock Calibration), the feature that enables the system to recognize the fourth core of an X3 processor. At press time, a Greek site (www.hwbox.gr) also reported success using the Nvidia chipset-based Gigabyte GA-M720-US3 paired with a beta BIOS.
Because the first site to report the exploit, Playwares, used the Biostar TA790GX 128M, we decided to use that motherboard as the platform for our testing.
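Mechanically, the date-code scheme described above boils down to “two-digit year, two-digit week.” As a quick illustration, here is how you might decode it; the function name is ours, and the assumption that the year digits fall in the 2000s is inferred from the 2008/2009 examples in the article:

```python
def decode_phenom_batch(code):
    """Decode an AMD date code such as '0849CPMW'.

    The first two digits are the two-digit year (assumed 2000s) and
    the next two are the week of manufacture; the trailing letters
    are the rest of the batch string, returned untouched.
    """
    year = 2000 + int(code[0:2])
    week = int(code[2:4])
    suffix = code[4:]
    return year, week, suffix

# The 720 sample from the article: week 49 of 2008
print(decode_phenom_batch("0849CPMW"))  # -> (2008, 49, 'CPMW')
# The 710 sample: week 6 of 2009, one of the "magic" batches
print(decode_phenom_batch("0906MPMW"))  # -> (2009, 6, 'MPMW')
```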
Try, Try Again
Our system consists of the Phenom II X3 710 and 720, a Cooler Master V8 CPU cooler, Biostar TA790GX 128M (AMD 790GX + SB750) motherboard, 2GB Corsair TWIN2X2048-6400C4 (2x 1GB, DDR2-800) SDRAM, ATI Radeon HD 4890 graphics card, PC Power & Cooling Silencer 500 EPS12V power supply, and a 1TB Western Digital Caviar Black WD1001FALS hard drive. The Biostar motherboard’s BIOS is version 2.61, and Windows Vista Ultimate is our OS. We started with the higher-performing processor first, the 2.8GHz Phenom II 720 Black Edition. We installed it into the system, entered the BIOS Setup Utility, navigated to the Advanced tab, selected CPU Configuration, scrolled to the bottom of the page to highlight Advanced Clock Calibration, and set it to Auto. After saving the changes and restarting, the PC wouldn’t budge from a black screen, failing to boot or even begin the POST. We pressed CTRL-ALT-DELETE and tried to reboot, and again, the system failed to even initialize the display. This unsurprising result is likely to happen when you turn on ACC with most Phenom II X3s that aren’t from one of the magic batches. According to a product manager from a prominent motherboard manufacturer, “The original function of Advanced Clock Calibration is to sync the different speeds of each core of a multicore processor”— to help when overclocking the original Phenom processors. Prior to the launch of the Phenom IIs, and before the tech press discovered ACC’s special new ability, an AMD product manager had this to say about ACC: “Things learned through developing ACC with the 65nm Phenom were baked into our new 45nm Phenom II silicon. . . . You can just as well leave ACC off for Phenom II [overclocking] testing.” Our unnamed source tells us his theory as to why ACC unlocks the fourth core of Phenom II X3 processors. “These X3 cores are actually X4 cores, and . . . 
the ones that fail in certain cores, instead of throwing them away, [the chip maker] just disables [the core] with a register that they add in, and in certain date codes, [AMD’s] manufacturing plant failed to add that register.” Then presumably, stability concerns notwithstanding, this exploit should work on every processor from those certain date batches? “Yes. Actually, we’ve known about this for some time.” When pressed for when the ACC exploit first came to his attention, our contact explains, “I seem to remember seeing something about this in . . . November or December.” Armed with renewed confidence and one more Phenom II X3 processor to test, we cleared the CMOS, removed our stubborn Phenom II X3 720 Black Edition, installed the Phenom II X3 710, switched ACC to Auto, crossed our fingers, and restarted. Almost immediately, we were greeted with a positive sign, as the POST displayed our CPU as an “AMD Phenom II X4 10 Processor.” In Windows, a pop-up informed us that device driver software had installed successfully for an “AMD Phenom II X4 10 Processor.” CPU-Z also confirmed that although we were running a Deneb-based Phenom II X3 710, it was equipped with four cores and capable of handling four threads. To determine whether a heavy load would trip up our new quad-core, we ran Prime95 on all four cores for an extended period, and it passed with flying colors. We ran our suite of processor-stressing benchmarks, and the system remained stable throughout. Better than stable: In benchmarks that scale well across multiple cores, our unlocked Phenom II X3 710 performed in line with what we’d expect from a quad-core Phenom II. Check out the “Unlocked & Overclocked” chart to see the numbers. Having proven that our Phenom II X3 710 can be unlocked to take advantage of its dormant fourth core, we decided to push our luck a bit and overclock it.
Although we wanted to get as much performance out of the chip as possible, we were also wary of pushing the thermal envelope too much. We managed to get stable performance at 3.18GHz by tuning the CPU HyperTransport clock to 245MHz, increasing the northbridge frequency to 2,000MHz and giving the CPU an additional 0.125V to work with. As a result, our unlocked and overclocked Phenom II X3 710 outperformed AMD’s Phenom II X4 940 Black Edition (see page 43 in CPU’s March 2009 issue), a processor that—as we went to press—was selling for close to $100 more than the Phenom II X3 710.
Four Is Better Than Three
As hacks, mods, and exploits go, unlocking the fourth core on a Phenom II X3 processor is about as easy as it gets. Unfortunately, the hard part is just getting your hands on a processor from the batch missing the fourth core disable register. Unless you can find an online retailer that will reveal the batch numbers prior to purchase, there’s no way to know if you’re getting an X3 from an exploitable batch. On the other hand, our winning 710 is one of the more recent X3s to show success, so there’s a possibility that any X3 that has a stable fourth core will work.
Tuesday, August 11, 2009
So, this is kinda interesting. AMD just introduced the Phenom II X4 955, and at its price point, it’s very competitive. The chip retails for $245 and, as it turns out, is faster than any Core 2 Quad that’s similar in price. Even AMD’s Phenom II X3 720 has the performance-per-dollar advantage at its price point. Where AMD can’t compete is at Nehalem’s price points, anything priced at $284 or higher. AMD’s dominance doesn’t extend all the way to the bottom of the price ladder, either. In addition to the Phenom II X4 955, AMD just released the Athlon X2 7850, priced at $69. A very affordable CPU, yes, but it doesn’t compare well with its closest competitor, Intel’s Pentium E5300, in terms of price/performance. Why is it that AMD is competitive in one price band but not another? Or, looked at from the opposite perspective, how can Intel dominate performance at one end but not at another? As you can guess, it boils down to having a multitude of architectures in the market at the same time.
Intel’s high-end processors are truly next-gen where performance is concerned. The Core i7 starts at $284 and goes all the way up to $999, and AMD simply can’t outperform those chips. In some situations it gets close, but overall, Core i7 offers a 0 to 40% performance advantage over the best of the rest. Unfortunately, the Core i7’s underlying Nehalem architecture won’t make its way into mainstream parts until the last quarter of this year. It’s unclear what Intel will call the mainstream version, but most are speculating that it’ll carry the Core i5 name. The i5 should be able to compete quite well with AMD’s Phenom II, but given that it won’t be out until sometime around October, that leaves Intel’s Core 2-based CPUs to compete with Phenom II. Clock-for-clock, Intel has the advantage; however, AMD is very aggressive with its pricing, and Intel fully intends to keep turning a profit even in poor economic times. The end result is what happened with the Phenom II X4 955: AMD’s 3.2GHz offering competes with Intel’s 2.83GHz Core 2 Quad Q9550. It keeps the marketplace competitive for the consumer, but AMD turned in a hefty loss on its earnings last quarter, so it’s not an approach the company can keep up for long. Move down in price again, and this time AMD’s architecture is the one that changes. Phenom II CPUs are built using AMD’s 45nm process on a die that’s nearly as large as Nehalem’s; there’s simply no way they could be sold for under $100 at this point. Instead, AMD is rebadging last year’s Phenom processors as Athlon X2s.
These things are 65nm quad-core chips with two cores disabled. The dies are too big to be sold for under $100, but after Phenom II, no one really wants an original Phenom, so AMD has no other option than to rebrand them as dual-core Athlon processors. Intel doesn’t switch architectures as you drop down to the $70 price point. The Pentium processor is a Core 2 derivative, albeit with less cache. At around $70, you’ve got the Athlon X2 7850 and Intel’s Pentium E5300. In pretty much all application benchmarks, Intel takes the win there. The notable exception is gaming performance, where AMD is the winner. Intel also has the lower power consumption, as you’re getting a small 45nm die instead of a large used-to-be-a-quad-core 65nm die. I’d say that Intel is the victor at $70, but that all depends on whether you’re building a gaming machine. Over the next six to 12 months, we’ll see both manufacturers try to transition all of their CPUs to the same architecture, which may make things more clear-cut. Until then, that’s why the world performs the way it does.
Sunday, August 09, 2009
If you just got a brand-new Phenom II X4 940 BE and are kicking yourself for not waiting for the 955 BE, you can stop. The 955 is pretty much the same story as the 940, just with a slightly faster clock (3.2GHz vs. 3GHz). AMD pulled the same stunt with the Phenom X4 9850 BE and 9950 BE. It’s not that the 955 is unimpressive; it’s just that it’s no more impressive than its predecessor, the 940. The slight bump in benchmark performance is easily explainable by the slightly higher clock of the 955. If you’re overclocking, the bump is a bit more noticeable. I hit P9642 in 3DMark Vantage with a clock speed of 3.86GHz—a significant step up from the stock-speed score of P6488. The bottom line is that the Phenom II X4 955 Black Edition is a slightly better chip than the 940 and enters the market at a better price—$30 less, to be exact. We can only assume that price will drop with the next batch of AMD CPUs, so it’s a great deal for AMD’s best processor.
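For perspective, the overclocked result above scales better than the clock bump alone would suggest, though keep in mind that the overall 3DMark Vantage “P” score also reflects GPU tests, so this is only rough arithmetic on the reported numbers, not a controlled comparison:

```python
# Figures reported in the review: 3DMark Vantage overall scores and clocks
stock_score, oc_score = 6488, 9642     # P6488 -> P9642
stock_clock, oc_clock = 3.2, 3.86      # GHz

# Percentage gains from overclocking
score_gain = (oc_score - stock_score) / stock_score * 100
clock_gain = (oc_clock - stock_clock) / stock_clock * 100
print(f"score +{score_gain:.1f}%, clock +{clock_gain:.1f}%")
# prints: score +48.6%, clock +20.6%
```

A roughly 49% score jump from a roughly 21% clock increase hints that something other than the CPU clock alone (the raised HyperTransport/northbridge speeds that typically accompany such overclocks, for instance) contributed to the result.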
Saturday, August 08, 2009