
WEB BROWSER DRIVE-BY EXPLOITS IN THE WILD

Client-side exploits are a real concern for the security staff of every company worldwide. As Neil Daswani, CTO and founder of Dasient, reported at the OWASP AppSec DC conference, the dramatic growth in exploits against client applications, as opposed to server daemons, demonstrates that the weakest link is still the end user. Moreover, it proves hard to deploy a corporate-wide policy that restricts the use of, or applies patches for, vulnerable applications when a 0-day is released every other week against common applications such as Adobe Reader, Flash Player or Mozilla Firefox. The most targeted of these client applications are web browsers and their plugins. By means of drive-by download exploits, botnets easily recruit new zombies, silently downloading and installing malware without ever raising the victim's suspicion. These drive-by exploits have become more and more complex in terms of distribution and obfuscation: most involve JavaScript and iframe injection, while others exploit the latest Flash Player vulnerabilities.

Types of Malware

Malicious software comes in many forms. All forms have certain things in common, though. For one, they’re invisible — you don’t even know they’re there. For another, they all do something bad, something you don’t really want happening on your computer. Third, they’re all written by human programmers to intentionally do these bad things. The differences have to do with how they spread and what they do after they’re on your computer. I tell you about the differences in the sections to follow.



Viruses and worms
Viruses and worms are self-replicating programs that spread from one computer to the next, usually via the Internet. A virus needs a host file to spread. The host file can be anything, though viruses are typically hidden in e-mail attachments and in programs you download.

A worm is similar to a virus in that it can replicate itself and spread. However, unlike a virus, a worm doesn't need a host file to travel around. It can go from one computer to the next right through your Internet connection. That's one reason it's important to always have a firewall up when you're online: to keep out worms that travel through Internet connections.

The harm caused by viruses and worms ranges from minor pranks to serious damage. A minor prank might be something like a small message that appears somewhere on your screen where you don’t want it. A more serious virus might erase important files, or even try to erase all your files, rendering your computer useless.



Spyware and adware
Spyware and adware are forms of malware that aren't designed to harm your computer directly. Rather, they're designed to help people sell you stuff. A common spyware tactic is to send information about the Web sites you visit to computers that send out advertisements on the Internet. Such a computer analyzes the Web sites you visit to figure out what types of products you're most likely to buy, then sends ads for those products to your computer.

Adware is the mechanism that allows ads to appear on your computer screen. When you get advertisements on your screen, seemingly out of the clear blue sky, there’s usually some form of adware behind it. Spyware and adware often work in conjunction with one another. The adware provides the means to display ads. The spyware helps the ad server (the computer sending the ads) choose ads for products you’re most likely to buy.



Trojan horses and rootkits
You may have heard the term Trojan horse in relation to early mythology. The story goes like this. After 10 years of war with the city of Troy, the Greeks decided to call it quits. As a peace offering, they gave to the people of Troy a huge horse statue named the Trojan horse. While the people of Troy were busy celebrating the end of the war, Greek soldiers hidden inside the horse snuck out and opened the gates to the city from inside. This allowed other Greek soldiers, lying in wait hidden outside the city, to storm into the town and conquer it. (This is definitely a case in which it would have been wise to look a gift horse in the mouth.)

A Trojan horse is a program that works in a similar manner. In contrast to other forms of malware, a Trojan horse is a program you can actually see on your screen and use. On the surface, it does do something useful. However, hidden inside the program is some smaller program that does bad things, usually without your knowledge.

A Trojan horse can also be a program that hides nothing but could be used in bad ways. Take, for example, a program that can recover lost passwords. On the one hand, it can be a good thing if you use it to recover forgotten passwords from files you created yourself. But it can be a bad thing when used to break into other people's password-protected files. A rootkit is a program that is capable of hiding itself, and the malicious intent of other programs, from the user and even from the system. As with Trojan horses, not all rootkits are inherently malicious. However, they can certainly be used in malicious ways. Windows 7 protects your system from rootkits on many fronts, including with Windows Defender.

Source of Information: Windows 7 Bible (2009)

Building a Secure Squid Web Proxy

What Exactly Is a Web Proxy?
The concept of a Web proxy is simple. Rather than allowing client systems to interact directly with Web servers, a Web proxy impersonates the server to the client, while simultaneously opening a second connection to the Web server on the client’s behalf and impersonating the client to that server.

Because Web proxies have been so common for so long, all major Web browsers can be configured to communicate directly through Web proxies in a “proxy-aware” fashion. Alternatively, many Web proxies support “transparent” operation, in which Web clients are unaware of the proxy’s presence, but their traffic is diverted to the proxy via firewall rules or router policies.
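For example, pointing a proxy-aware command-line client at the proxy is explicit, while transparent operation is typically implemented as a firewall redirect rule. The sketch below is illustrative only: it assumes a Linux gateway running iptables, Squid listening on its default port 3128, and hypothetical host and interface names.

    # Proxy-aware operation: most Unix clients honor the http_proxy
    # environment variable (the proxy hostname here is hypothetical).
    export http_proxy=http://proxy.example.com:3128
    curl http://www.example.com/

    # Transparent operation: divert outbound HTTP arriving on the LAN
    # interface (eth1 here) to the local Squid. Squid itself must also
    # be configured for interception, e.g. "http_port 3128 transparent".
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
        -j REDIRECT --to-ports 3128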


Why Proxy?
Just because nowadays it’s easy to interconnect TCP/IP networks directly doesn’t mean you always should. If a nasty worm infects systems on your internal network, do you want to deal with the ramifications of the infection spreading outward, for example, to some critical business partner with whom your users communicate over the Internet? In many organizations, network engineers take it for granted that all connected systems will use a “default route” that provides a path out to the Internet. In other organizations, however, it’s considered much less risky to direct all Web traffic out through a controlled Web proxy to which routes are internally published and to use no default route whatsoever at the LAN level.

This has the effect of allowing users to reach the Internet via the Web proxy (that is, to surf the Web) but not to use the Internet for non-Web applications, such as IRC, on-line gaming and so forth. It follows that whatever end users can't do, neither can any malware that manages to infect their systems. Obviously, this technique works only if you've got other types of gateways for the non-Web traffic you need to route outward, or if the only outbound Internet traffic you need to deal with is Web traffic. My point is, a Web proxy can be a very useful tool for controlling outbound Internet traffic.

What if your organization is in a regulated industry, in which it's sometimes necessary to track some users' Web access? You can do that on your firewall, of course, but generally speaking, it's a bad idea to make a firewall log more than you have to for forensics purposes. This is because logging is I/O-intensive, and too much of it can negatively impact the firewall's ability to fulfill its primary function: assessing and dealing with network transactions. (Accordingly, it's common practice to log mainly "deny/reject" actions on firewalls and not to log "allowed" traffic except when troubleshooting.) A Web proxy, therefore, is a better place to capture and record logs of Web activity than a firewall or other network device.
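In Squid's case, for instance, per-request logging takes a single directive in squid.conf; the log path shown is a common default and varies by distribution:

    # Log one line per client transaction in Squid's native format
    # (timestamp, duration, client address, result/status code,
    # bytes, method and URL).
    access_log /var/log/squid/access.log squid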

Another important security function of Web proxies is blacklisting. This is an unpleasant topic (if I didn't believe in personal choice and freedom, I wouldn't have been writing about open-source software since 2000), but the fact is that many organizations have legitimate, often critical, reasons for restricting their users' Web access. A blacklist is a list of forbidden URLs and domain names. A good blacklist allows you to choose from different categories of URLs to block, such as social networking, sports, pornography, known spyware-propagators and so on. Note that not all blacklist categories necessarily involve restricting personal freedom per se; some blacklists provide categories of "known evil" sites that, regardless of whatever content they're actually advertising, are known to try to infect users with spyware or adware, or otherwise attack unsuspecting visitors. And I think a lot of Web site visitors do tend to be unsuspecting. The classic malware vector is the e-mail attachment: an image or executable binary that you trick the recipient into double-clicking on. But what if you could execute code on users' systems without having to trick them into doing anything but visit a Web page?

In the post-Web 2.0 world, Web pages nearly always contain some sort of executable code (Java, JavaScript, ActiveX, .NET, PHP and so on), and even if your victim is running the best antivirus software with the latest signatures, it won’t examine any of that code, let alone identify evil behavior in it. So, sure enough, the “hostile Web site” has become the cutting edge in malware propagation and identity theft.

Phishing Web sites typically depend on DNS redirection (usually through cache poisoning), which involves redirecting a legitimate URL to an attacker's IP address rather than to that site's real IP, so they're difficult to protect against with URL or domain blacklists. (At any rate, none of the free blacklists I've looked at includes a phishing category.) Spyware, however, is a common blacklist category, and a good blacklist contains thousands of sites known to propagate client-side code you almost certainly don't want executed on your users' systems. Obviously, no URL blacklist can ever cover more than a tiny fraction of the hostile Web sites active at any given moment. The real solution to the problem of hostile Web sites is some combination of client/endpoint security controls, better Web browser and operating system design, and advancing the antivirus software industry beyond its reliance on virus signatures (hashes of known evil files), where it's been stuck for decades.

Nevertheless, at this very early stage in our awareness of and ability to mitigate this type of risk, blacklists add some measure of protection where presently there's very little else. So, regardless of whether you need to restrict user activity per se (blocking access to porn and so forth), a blacklist with a well-maintained spyware category may be all the justification you need to add blacklisting capabilities to your Web proxy. SquidGuard can be used to add blacklists to the Squid Web proxy; a minimal configuration sketch appears below. If you're serious about blocking access to sites that are inappropriate for your users, though, blacklisting is an admittedly primitive approach. Therefore, in addition to blacklists, it makes sense to do some sort of content filtering as well, that is, automated inspection of actual Web content (in practice, mainly text) to determine its nature and handle it accordingly. DansGuardian is an open-source Web content filter that even has antivirus capabilities.
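As a concrete sketch of the blacklisting piece, hooking SquidGuard into Squid takes one line in squid.conf plus a small SquidGuard configuration. Everything below is a minimal, hypothetical example: the database path, the category name and the "blocked" page URL are assumptions, and the url_rewrite_program directive applies to Squid 2.6 and later.

    # In squid.conf: hand each requested URL to SquidGuard for checking.
    url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf

    # In squidGuard.conf:
    dbhome /var/lib/squidguard/db
    logdir /var/log/squidguard

    # A "spyware" category: plain-text lists, one domain or URL per line.
    dest spyware {
        domainlist spyware/domains
        urllist    spyware/urls
    }

    acl {
        default {
            # Pass everything except blacklisted spyware sites...
            pass !spyware all
            # ...and send blocked requests to an explanatory page.
            redirect http://proxy.example.com/blocked.html
        }
    }

Note that SquidGuard works from compiled databases rather than the raw text lists, so after editing a list you would rebuild the databases (for example, with squidGuard -C all) and reload Squid.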

What if you need to limit use of your Web proxy but, for some reason, can't use a simple source-IP-address-based Access Control List (ACL)? One way to do this is by having your Web proxy authenticate users, and Squid supports authentication via a number of methods, including LDAP, SMB and PAM. However, I'm probably not going to cover Web proxy authentication here any time soon; 802.1x is a better way to authenticate users and devices at the network level. Route-limiting, logging, blacklisting and authenticating are all security functions of Web proxies. I'd be remiss, however, not to mention the main reason many organizations deploy Web proxies, even though it isn't directly security-related: performance. By caching commonly accessed files and Web sites, a Web proxy can reduce an organization's Internet bandwidth usage significantly, while simultaneously speeding up end-users' sessions. Fast and effective caching is, in fact, the primary design goal for Squid, which is why some of the features I've discussed here require add-on utilities for Squid (for example, blacklisting requires SquidGuard).
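Caching behavior, in turn, is tunable with a handful of squid.conf directives. The values below are illustrative, not recommendations:

    # RAM set aside for hot objects; a 2GB on-disk cache (ufs store,
    # 16 first-level and 256 second-level directories); and a cap on
    # the largest object worth caching.
    cache_mem 256 MB
    cache_dir ufs /var/spool/squid 2048 16 256
    maximum_object_size 64 MB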


Web Proxy Architecture
Suppose you find all of this very convincing and want to use a Web proxy to enforce blacklists and conserve Internet bandwidth. Where in your network topology should the proxy go? Unlike a firewall, a Web proxy doesn't need to be, nor should it be, placed "in-line" as a choke point between your LAN and your Internet uplink, although it is a good idea to place it in a DMZ network. If you have no default route, you can force all Web traffic to exit via the proxy through a combination of firewall rules, router ACLs and end-user Web browser configuration settings.

Firewall 1 allows all outbound traffic from the LAN to reach TCP port 3128 on the proxy in the DMZ; it does not allow any outbound traffic directly from the LAN to the Internet, passing only packets explicitly addressed to the proxy. Firewall 2 allows outbound traffic on TCP ports 80 and 443 from the proxy (and only from the proxy) to the entire Internet. Because the proxy is connected to a switch or router in the DMZ, if some emergency occurs in which the proxy malfunctions but outbound Web traffic must still be passed, a simple firewall rule change can accommodate this; the proxy is only a logical control point, not a physical one. Note also that this architecture could work with transparent proxying as well, if Firewall 1 is configured to redirect all outbound Web transactions to the Web proxy, and Firewall 2 is configured to redirect all inbound replies to Web transactions to the proxy.
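As a sketch, those two policies might look like the following Linux iptables forwarding rules. All addresses are hypothetical (LAN 192.168.1.0/24, proxy at 10.0.0.10 in the DMZ):

    # On Firewall 1: permit replies to established sessions, let LAN
    # hosts reach only the proxy and only on TCP 3128, drop the rest.
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -s 192.168.1.0/24 -d 10.0.0.10 \
        -p tcp --dport 3128 -j ACCEPT
    iptables -A FORWARD -s 192.168.1.0/24 -j DROP

    # On Firewall 2: only the proxy may originate outbound Web traffic.
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -s 10.0.0.10 -p tcp -m multiport \
        --dports 80,443 -j ACCEPT
    iptables -A FORWARD -j DROP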

You may be wondering, why does the Web proxy need to reside in a DMZ? Technically, it doesn't. You could put it on your LAN and have essentially identical rules on Firewalls 1 and 2 that allow outbound Web transactions only if they originate from the proxy. But what if some server-side attacker manages to get at your Web proxy via some sort of "reverse-channel" attack that, for example, uses an unusually large HTTP response to execute a buffer-overflow attack against Squid? If the Web proxy is in a DMZ, the attacker can attack systems on your LAN only through additional reverse-channel attacks that somehow exploit user-initiated outbound connections, because Firewall 1 allows no DMZ-originated inbound transactions; it allows only LAN-originated outbound transactions.

In contrast, if the Web proxy resides on your LAN, the attacker needs to get lucky with a reverse-channel attack only once and can scan for and execute more conventional attacks against your internal systems. For this reason, I think Web proxies are ideally situated in DMZ networks, although I acknowledge that the probability of a well-configured, well-patched Squid server being compromised via firewall-restricted Web transactions is probably low.


Yet to Come in This Series
I've explained (at a high level) how Web proxies work, described some of their security benefits and shown how they might fit into one's perimeter network architecture. What, exactly, will we be doing in subsequent articles? First, we'll obtain and install Squid and create a basic configuration file. Next, we'll "harden" Squid so that only our intended users can proxy connections through it. Once all that is working, we'll add SquidGuard for blacklisting and DansGuardian for content filtering. I'll also at least give pointers on other add-on tools for Squid administration, log analysis and other useful functions. Next month, therefore, we'll roll up our sleeves and plunge right into the guts of Squid configuration and administration. Until then, be safe!
Source of Information: Linux Journal, Issue 180, April 2009

Cloud storage is for blocks too, not just files

One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file...