Windows was built on the foundation of DOS, the disk operating system that shipped with the very first IBM PCs in the early 1980s. This brought with it several problems, many of which still exist today. The biggest is the need to maintain compatibility with legacy hardware and software. DOS did not support, or need to support, multiple users or multitasking; support for both was added in later versions of Windows.
As computers have changed over the last 30 years, and with the introduction of new technologies such as the Internet, the need for stronger security has come to the forefront of operating system design. Unfortunately, this has meant building security on top of the existing Windows system, which has inevitably led to compromises and security flaws that have been exploited by the authors of malicious software. With Windows 8, it is rumored that legacy support will be moved into a virtual machine, allowing the security subsystem to be treated differently and making the operating system much more secure.
Most other desktop and server operating systems, including Linux, Apple OS X, and Google Chrome OS, are built on or modeled after an operating system called UNIX. UNIX was developed in 1969 and was designed from the start to support multitasking and multiple users on the minicomputers and mainframes of the era.
This means that user permissions and overall security have always been handled differently in UNIX, with users never being given default administrator access to operating system files. Over the years, UNIX has slowly made its way from mainframes and minicomputers to the desktop market, and during that time this security model has remained essentially unchanged.
None of this means that Windows 7 is an insecure or unstable operating system; quite the opposite. It is the most secure and stable operating system Microsoft has ever released, and many experts believe it to be every bit as secure as a UNIX-based operating system. It is the vast popularity of Windows that has made it such a target for attackers in the past.
Source of Information : Microsoft Press - Troubleshooting Windows 7 Inside Out
Friday, December 31, 2010
Windows runs from a series of files and folders on your computer's hard disk. The basic folder structure is extremely logical and has been simplified considerably over the years. There are three basic Windows 7 folders, plus some extra folders for user data, configuration data, and temporary files.
• Program Files. This is where the files for any programs and software you install in Windows 7 sit. The 64-bit version of Windows 7 has two Program Files folders: Program Files (x86) for 32-bit software and Program Files for newer 64-bit software. Each program sits in its own folder under one of these. The 32-bit version has only a single Program Files folder.
• Users. This is where, by default, all of your documents and files sit; it is also where the per-user portion of the Windows registry (the database of settings for Windows and your software) is stored. Within the main Users folder there is one subfolder for each user, plus a folder called Public where shared files and folders are kept. There are also hidden user folders called Default and All Users.
• Windows. This is the main folder into which the operating system is installed.
Windows also installs hidden system files elsewhere on the disk, including in the root folder.
These hidden files and folders are where Windows stores operating system recovery software and folders to support legacy software, including Documents and Settings and the Autoexec.bat and Config.sys files that date back to the earliest versions of DOS.
Inside the main Windows folder are a great many different folders, some of which exist to maintain compatibility with legacy hardware and software and some of which service specific features within the operating system.
All of these files and folders are essential, and you should not move, rename, or delete any of them. Folders you might find of particular interest include the following.
• Globalization. This is where you will find the desktop wallpapers in Windows.
• Resources. This is a similar folder to Globalization but is for Windows desktop themes.
• System32. The main operating system files, including hardware device drivers, are located in this folder.
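To make the layout above concrete, here is a small sketch of the default 64-bit Windows 7 folder structure as just described. The paths are the standard defaults only; real systems can relocate them, and the roles noted are summaries of the descriptions above, not an exhaustive map.

```python
# Sketch of the basic 64-bit Windows 7 folder layout described above.
# Paths are defaults; actual locations can differ on customized systems.
layout = {
    r"C:\Windows": "operating system files",
    r"C:\Windows\System32": "core OS files and hardware device drivers",
    r"C:\Program Files": "64-bit software",
    r"C:\Program Files (x86)": "32-bit software",
    r"C:\Users\<name>": "per-user documents and settings",
    r"C:\Users\Public": "shared files and folders",
}

for path, role in layout.items():
    print(f"{path:28s} {role}")
```

Note that on a 32-bit installation the Program Files (x86) entry would not exist; everything installs under a single Program Files folder.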
Wednesday, December 29, 2010
Windows 7 is the latest edition in a series of desktop operating systems and graphical user interfaces (GUIs) from Microsoft. Windows 1.0 was released in 1985 as a GUI that sat on top of Microsoft's popular DOS disk operating system. Over the years, Windows was changed and refined, finally subsuming DOS and becoming a full operating system in its own right with the launch of Windows 98.
Windows 7 was released in October 2009. It is not exactly the seventh version of Windows; rather, it is the seventh version from its particular branch of the software. There have been two branches of Windows: the original consumer versions and the New Technology (NT) business versions. The consumer lineup included the popular Windows 3.1, Windows 95, and Windows 98, and ended with Windows Me. The NT series began in 1993 as an offshoot of Windows 3.1, with much of the underlying code reengineered to make it more stable and suitable for business users. The NT development tree later split into Server, Desktop, and Home Server variations, and it was this branch that produced the various server versions of the operating system as well as Windows XP, Windows Vista, and, most recently, Windows 7.
There is some debate about whether Windows 7 really is the seventh iteration of the NT family, but it's not the most important concern facing the world today. Officially, Windows 7 is the seventh iteration if you follow the tree Windows 1.0, Windows 2.0, Windows NT 3.1, Windows NT 4.0, Windows XP, Windows Vista, and Windows 7. Depending on your view, there have been as many as 28 versions of Windows between its first launch and 2010. In its various versions, Windows is currently used by well over a billion people worldwide.
Source of Information : Microsoft Press - Troubleshooting Windows 7 Inside Out
Monday, December 27, 2010
During the Cold War, spies were used to infiltrate governments, the military, businesses, and other organisations. Their job was to steal information (both classified and non-classified) that might prove valuable to another nation-state. Some people did this for individual financial gain, but in the main it was governments that wanted to learn about some new technology or secret weapon in order to develop it themselves.
This still goes on today, but it has evolved into more than just cyber spying; there is also something called social engineering. This is where one individual tricks another (through manipulation) into letting them inside a network, for example, so that they can crack the system from within rather than attempting to hack in from the outside.
Social engineering is often misunderstood and frequently not considered as part of corporate and government security policies. It is without doubt one of the biggest risks to nation-state and business security.
Think about two-factor authentication in IT security: the same principles can be applied to individuals, but the catch is that individuals can be convinced into sharing their authentication details, and extracting them this way takes far less time than a technical attack. Social engineers are well versed in extracting sensitive information from individuals; personality traits and behaviour patterns are good starting points. Social engineers (often referred to as security crackers) use the telephone system to learn company lingo, search the Internet for additional corporate data to build their knowledge base, and weave their way toward the IT security department. Once there, a security cracker can impersonate someone from that department and ask for the remote login credentials. It has been done.
Why not Google Kevin Mitnick?
He's one of the world's best-known social engineering wizards and has managed to crack many a system using social engineering techniques alone. Individuals are the weakest link in any cyber security strategy, but with good education and motivation it is possible to reduce the risk of this attack vector.
Source of Information : Hakin9 November 2010
Saturday, December 25, 2010
For Nvidia, 2010 was the year of Fermi, the GPU architecture found on the GeForce GTX 480, 470, 465, 460, and 580 graphics cards. Earlier in the year, AMD launched the Radeon HD 5670, 5570, and 5450, which were designed to appeal to the budget gamer and HDTV crowd. We also saw the Radeon HD 5870 Eyefinity Edition, which allows for six-monitor gaming setups. More recently, AMD released the 6870 and 6850 to compete at the midrange level.
Winner: Nvidia GeForce GTX 460 (1GB)
The GeForce GTX 460 was a top midrange performer in 2010, and it helped show what Nvidia's Fermi architecture is capable of. The GTX 460 is built on the GF104 GPU, an update of the GF100 GPU used in the GTX 480 and 470. As a midrange card, it features two GPCs (Graphics Processing Clusters), whereas the GTX 480 had four; but compared to the GF100, Nvidia improved the clusters by adding 16 more CUDA cores and twice the number of special function units and texture units to each streaming multiprocessor. The result is an affordable graphics card that can still play today's newest games at high frame rates.
Nvidia offers two versions of the GTX 460, one with 768MB of GDDR5 memory and one with 1,024MB. The 1GB version offers 32 ROPs and a 256-bit memory controller, while the 768MB version features 24 ROPs and a 192-bit memory controller. Core (675MHz), shader (1,350MHz), and memory (1,800MHz) frequencies are identical on both versions. There are 336 stream processors and 56 texture units, giving the card the ability to process large amounts of work in parallel.
For outputs, the GTX 460 offers two dual-link DVI ports and a mini HDMI port. Those with limited space in their case will also like that the GeForce GTX 460 is only 8.25 inches long. It requires two 6-pin PCI-E power connectors, and Nvidia suggests that your power supply should be 450 watts or greater.
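The practical difference between the two memory controllers above is peak memory bandwidth. A rough calculation, assuming the usual GDDR5 convention that data transfers at twice the listed 1,800MHz clock (3,600 MT/s effective), reproduces Nvidia's published figures of 115.2 GB/s for the 1GB card and 86.4 GB/s for the 768MB card:

```python
# Back-of-envelope peak memory bandwidth for the two GTX 460 variants.
# Assumes the GDDR5 effective rate is twice the listed 1,800MHz clock.
def bandwidth_gbps(effective_mtps, bus_width_bits):
    """Peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    return effective_mtps * 1e6 * (bus_width_bits // 8) / 1e9

print(bandwidth_gbps(3600, 256))  # 1GB model, 256-bit bus: 115.2 GB/s
print(bandwidth_gbps(3600, 192))  # 768MB model, 192-bit bus: 86.4 GB/s
```

The shader counts and clocks being identical, this roughly 33% bandwidth gap (plus the extra ROPs) is the main reason to prefer the 1GB model.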
First Runner-Up: AMD Radeon HD 6850
In late October, AMD released the Radeon HD 6800 series, and the Radeon HD 6850 was a top performer in terms of price/performance. The Barts GPU found in the Radeon HD 6850 is a redesigned version of the Cypress GPU found in the 5800 series: about 25% of the engines devoted to compute, shader, and texture performance were reassigned to handle rasterization, tessellation, and ROP work. The upshot is that even though the Radeon HD 6850 has about half a billion fewer transistors than the 5850, it delivers gaming performance close to the 5850's, and you'll pay $50 to $70 less for it.
The Radeon HD 6850 features a core clock of 775MHz and 1GB of GDDR5 memory running at 1,000MHz. There are 960 stream processors, 48 texture units, and 32 color ROP units. AMD also improved the video outputs. There's a DisplayPort 1.2 connector that allows a maximum resolution of 2,560 x 1,600 per display, and the HDMI 1.4a port allows for stereoscopic 3D and high bit-rate audio. There are also two dual-link DVI outputs. The Radeon HD 6850's integrated audio controller can provide 7.1-channel surround sound over either the HDMI or DisplayPort connections. Finally, the card supports DirectX 11, Shader Model 5.0, OpenGL 4.1, and AMD's Eyefinity.
Second Runner-Up: Nvidia GeForce GTX 580
The GeForce GTX 580 is the newest iteration of Fermi, and it attempts to make amends for the heat and noise problems found in the GTX 470 and 480. The GeForce GTX 580 also captured the single GPU performance crown over the GTX 480, because it offers more stream processors, faster clock speeds, and more texture units.
Overall, clock speeds are around 10% faster than the GTX 480's. Even better, Nvidia claims that the GTX 580 offers lower power consumption than the GTX 480, so you'll battle fewer heat issues. In terms of specs, the GeForce GTX 580 offers 512 CUDA cores, 1.5GB of GDDR5 memory, and a 384-bit memory controller. It supports DirectX 11, OpenGL 4.1, and Shader Model 5.0.
Source of Information : Computer Power User (CPU) January 2011
Friday, December 24, 2010
China, as previously discussed, has the potential to wreak havoc, so it's no surprise that it has developed comprehensive cyber espionage programmes (targeting, for example, computer hardware and software); created citizen hacker groups; established cyber warfare units (much like many other nation-states); and embedded logic bombs and trap doors in the infrastructure networks and computer software of many nation-states. Chinese cyber warfare strategy is very much politically driven.
China has developed a detailed cyber warfare strategy and works closely with private hacker groups; to date there are probably 200 to 300 such groups working directly with the Chinese government. Take into account that they now have access to the Microsoft source code, which means they can understand security vulnerabilities long before they are identified and fixed by Microsoft. The Chinese government does not use Microsoft software for its own networks; rather, it uses an open source operating system called Kylin. The reason is clear: the plan is to use knowledge of Microsoft's code to sabotage, or exploit as-yet-unidentified vulnerabilities in, the Windows-based systems of other nation-states.
Russia, however, remains the biggest threat in cyberspace, according to leading US security researchers and the US government. After all, this is the land of the chess masters. In January 2009, the world witnessed the third successful cyber attack against a country (all such attacks up to that point had been attributed to Russia). The target was Kyrgyzstan, a small country of only about 77,000 square miles with a population of just over 5 million. The attackers focused on three of the country's four Internet service providers, launching distributed denial-of-service traffic that quickly overwhelmed all three and disrupted Internet communications across the country.
The IP traffic was traced back to Russian-based servers primarily known for cyber crime activity. Multiple sources have blamed the attack on the Russian cyber militia and/or the Russian Business Network (RBN). RBN is thought to control the world's largest botnet, with between 150 and 180 million nodes. In this particular attack, it is believed that the Russian government wanted to keep itself at arm's length from the hostile act.
Did you know? The Russian Business Network (RBN) is a cybercrime organization specializing in, and in some cases monopolizing, personal identity theft for resale. It is the originator of the MPack exploit kit and the alleged operator of the Storm botnet. (Reference: Wikipedia, edited)
Source of Information : Hakin9 November 2010
Thursday, December 23, 2010
Cyber warriors will attempt to sabotage the backbone of the Web, which includes attacking BGP, the Border Gateway Protocol. According to leading security experts, BGP is one of the most vulnerable access points on the Internet. It is the core routing protocol that determines the best available routes (i.e., the shortest paths) for traffic to flow across the Internet. There were two instances in 2010 where bad routing information originating from China disrupted the Internet: bad routing tables affected about 10 per cent of the Internet, roughly 36,000 global networks. The BGP routing error caused dropped connections and, most worryingly of all, Internet traffic to be re-routed through China.
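The core of the problem is how trusting BGP's route selection is. The toy sketch below (AS numbers and prefixes are invented for illustration; real BGP applies several tie-breakers before AS-path length) shows the essential failure mode: a router that hears a bogus announcement with a shorter AS path will simply prefer it, redirecting traffic to the announcer.

```python
# Toy illustration of why bogus BGP announcements redirect traffic:
# absent filtering, routers prefer the route with the shortest AS path.
def best_route(announcements):
    """Pick the announcement with the shortest AS path (ties: first seen)."""
    return min(announcements, key=lambda a: len(a["as_path"]))

routes = [
    {"prefix": "203.0.113.0/24", "as_path": [64500, 64501, 64502]},  # legitimate origin
    {"prefix": "203.0.113.0/24", "as_path": [64999]},                # bogus, shorter path
]

print(best_route(routes)["as_path"])  # the bogus route wins: [64999]
```

Nothing in the base protocol verifies that the announcing network actually owns the prefix, which is why a single source of bad routing tables could pull traffic for tens of thousands of networks.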
Software and technology outsourcing
Many Western countries have, over the years, outsourced their IT and technology development overseas, mainly to cut costs. This has led some security researchers to speculate that there is a significant risk of Western businesses selling compromised technology and software back to governments and customers alike. Western militaries, for example, have no restrictions on where computer chips are made, so it's conceivable that malicious code such as logic bombs and trap doors may well be embedded in the millions of lines of outsourced computer code.
Microsoft software development doesn't just happen in the US; it takes place all over the world, on many different development servers. The US Department of Defense uses Microsoft Windows, so you can see the opportunities cyber criminals and cyber warriors will identify. Having the Windows source code distributed all over the world leaves the code open to trap doors and other malicious activity. It's very difficult to control and manage millions of lines of code; if the code couldn't be exploited, why does Microsoft release monthly security patches? The answer here (and I'm sure the US government agrees) is to keep the Windows source code in the US domain and under total US control.
Cyber weapons – what would be considered an act of war?
Most cyber weapons are designed to be used only once. If a weapon is used more than once, cyber defenders will be able to detect it and do the research needed to defend against the same family of weapons. Some nation-states (mentioned earlier) have the capability to launch sophisticated attacks against other nation-states: DDoS attacks on the stock market, logic bombs that ground the airlines, and strikes that disable the transport network and the electricity grid.
Take the US: as probably the most digitally connected country in the world (militarily at least), provoking a cyber war would not be a clever plan. The other big problem with a cyber war is that whoever strikes first stands a much better chance of winning. China, for example, could hit the US with an all-out cyber attack and then disconnect itself from the rest of cyberspace.
So, what constitutes an act of war? It's difficult to determine, because so many attack vectors (common today, and happening right now) have never provoked a cyber war. Is it the penetration of a network? The sabotage of a network? The theft of classified government documents from a military network? What are the stages of a cyber war? Suppose malicious code has been planted and propagates across a network; the code isn't activated yet, but when it is, is that an act of war? And who decides? There are lots of questions and not many answers right now.
Source of Information : Hakin9 November 2010
Tuesday, December 21, 2010
As with any new technology, time is needed for development and for the experience required to tune for good performance and acceptable stability. Recall, again, that Teradata has been shipping massively parallel systems running a proprietary DBMS since 1984; the major DBMS vendors embarked on their own versions toward the end of the 1980s.
When used for decision support, parallel databases provide excellent results. On the other hand, their use for transaction processing has been less satisfactory.
The difficulty of obtaining suitable scalability in transaction processing (a much larger portion of the market than decision support) explains their limited success there. This is a real-world example of the difficulties faced in writing software for massively parallel architectures.
Source of Information : Elsevier Server Architectures 2005
Sunday, December 19, 2010
Hardware availability is just one, albeit important, factor in server availability. Over the past few years, hardware availability has increased because of technology improvements. Key factors in this are the increasing level of integration (which reduces the component count necessary to implement a system, as well as the number of connectors needed), improvements in data integrity, and the use of formal methods during the design of hardware components.
Systems based around standard technologies increasingly integrate functionality— such as system partitioning or the ability to swap subsystems “online,” i.e., without requiring that the system be brought down—that was until very recently the prerogative of mainframe systems and of “Fault Tolerant” systems. As a result of such advancements, it is possible to reach very high levels of hardware availability without entering the domain of specialized, expensive machinery.
On the other hand, it must be realized that software failures are more frequent than hardware failures, and the gap is widening. Hardware reliability keeps improving, but the amount of software in a system keeps increasing while its quality (as measured, for example, by the number of defects per thousand lines of code) shows little sign of improving. What matters for a system is total availability: a combination of hardware quality, the quality of software written or provided by the manufacturer, the quality of third-party software, and finally the quality of any applications and operating procedures developed within the company. This last factor deserves special attention; all too often, underestimating the importance of the quality of in-house applications and operating procedures shows up in the failure rate of the system.
It is appropriate to be very precise in any calculation concerning availability. To illustrate this point, consider a system that works 24 hours a day, 7 days a week, with an availability of 99.999%. This figure implies that the system is down no more than five minutes a year. Whether planned downtime is included in this budget makes a big difference in the difficulty of achieving this objective. Finally, we must not forget that any use of redundancy to improve availability tends to have an effect on performance.
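The "five minutes a year" figure follows directly from the definition of availability, and working it out shows just how punishing each extra nine is:

```python
# Downtime budget implied by an availability figure, assuming 24x7
# operation over a 365-day year (planned downtime included).
def downtime_minutes_per_year(availability):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1.0 - availability) * minutes_per_year

print(round(downtime_minutes_per_year(0.99999), 2))  # five nines: ~5.26 minutes
print(round(downtime_minutes_per_year(0.999), 1))    # three nines: ~525.6 minutes
```

Each additional nine cuts the allowable downtime by a factor of ten, which is why excluding planned maintenance from the budget changes the difficulty of the objective so dramatically.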
Are RISC processors dead, killed by Intel?
Source of Information : Elsevier Server Architectures 2005
Saturday, December 18, 2010
Thursday, December 16, 2010
Electronic signals for wireless communication must be converted into electromagnetic waves by an antenna for transmission. Conversely, an antenna at the receiver side is responsible for converting electromagnetic waves back into electronic signals. An antenna can be omnidirectional or directional, depending on the usage scenario. For an antenna to be effective, its size must be consistent with the wavelength of the signals being transmitted or received. Antennas used in cell phones are omnidirectional and can be a short rod on the handset or hidden within it. A recent advancement in antenna technology is the multiple-input, multiple-output (MIMO) antenna, or smart antenna, which combines spatially separated small antennas to provide high bandwidth without consuming more power or spectrum. To take advantage of multipath propagation, these small antennas must be separated by at least half the wavelength of the signal being transmitted or received.
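The half-wavelength spacing rule is easy to evaluate from the standard relation λ = c / f. The 1.9GHz example frequency below is an assumption (a common cellular band), not a figure from the text:

```python
# Minimum MIMO antenna spacing (half a wavelength) for a given carrier
# frequency, from lambda = c / f. The 1.9GHz example is an assumption.
C = 299_792_458  # speed of light in a vacuum, m/s

def half_wavelength_cm(freq_hz):
    wavelength_m = C / freq_hz
    return wavelength_m / 2 * 100  # convert metres to centimetres

print(round(half_wavelength_cm(1.9e9), 1))  # ~7.9 cm
```

A spacing of roughly 8 cm fits comfortably in a laptop or access point but is tight inside a phone handset, which is one reason handset MIMO antennas took longer to arrive.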
Wednesday, December 15, 2010
Evercookie stores cookie data in your browser in several ways: standard HTTP cookies, Flash cookies, force-cached PNG images, various HTML5 storage mechanisms, Web history, and SQLite. If Evercookie detects that you've been deleting your cookies, the program re-creates them.
According to Threatpost, Evercookie author Samy Kamkar, who spawned a MySpace worm in 2005, created the deletion-resistant cookie to increase public awareness of the privacy issues raised by tracking cookies, whether traditional HTTP or Flash. The open-source code is available on Kamkar's website as a free download.
One way around Evercookie's persistence is Safari's Private Browsing feature, which blocks all of the cookie's methods. Other browsers might stand up to Evercookie's methods of cookie resuscitation as well; Kamkar has not performed exhaustive testing.
Be careful about which sites you accept cookies from. Keep tabs, too, on the developing HTML5 standard, which some critics say emphasizes functionality at the expense of security.
Sunday, December 12, 2010 | 0 Comments
Information Technology Cloud: Should You Police Your Community?: "The issue of negative comments is one that every brand that signs up for a Facebook Page has to deal with. Although this could be an issue within a Group, especially if it is around an industry topic that might cause some level of debate, you will more likely feel this concern around Facebook Pages. The reason being, of course, that the comments or content are public. They can be seen by all."
Wednesday, December 01, 2010