Building a Secure Squid Web Proxy

What Exactly Is a Web Proxy?
The concept of a Web proxy is simple. Rather than allowing client systems to interact directly with Web servers, a Web proxy impersonates the server to the client, while simultaneously opening a second connection to the Web server on the client’s behalf and impersonating the client to that server.

Because Web proxies have been so common for so long, all major Web browsers can be configured to send their Web traffic through a proxy in a “proxy-aware” fashion. Alternatively, many Web proxies support “transparent” operation, in which Web clients are unaware of the proxy’s presence, but their traffic is diverted to the proxy via firewall rules or router policies.
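For example, on a Linux gateway, transparent operation is often implemented with a Netfilter redirect rule. The following is a minimal sketch, assuming the LAN faces eth1 and that Squid runs on the gateway itself, listening on its default port of 3128:

    # Divert all outbound HTTP from the LAN to the local Squid listener
    # (the interface name and port are assumptions; adjust for your network)
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128

Squid itself also has to be told it is intercepting traffic; in Squid 2.6 and later, that means adding the transparent option to its http_port line.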


Why Proxy?
Just because nowadays it’s easy to interconnect TCP/IP networks directly doesn’t mean you always should. If a nasty worm infects systems on your internal network, do you want to deal with the ramifications of the infection spreading outward, for example, to some critical business partner with whom your users communicate over the Internet? In many organizations, network engineers take it for granted that all connected systems will use a “default route” that provides a path out to the Internet. In other organizations, however, it’s considered much less risky to direct all Web traffic out through a controlled Web proxy to which routes are internally published and to use no default route whatsoever at the LAN level.

This has the effect of allowing users to reach the Internet via the Web proxy—that is, to surf the Web—but not to use the Internet for non-Web applications, such as IRC, on-line gaming and so forth. It follows that whatever end users can’t do, neither can any malware that manages to infect their systems. Obviously, this technique works only if you’ve got other types of gateways for the non-Web traffic you need to route outward, or if the only outbound Internet traffic you need to deal with is Web traffic. My point is, a Web proxy can be a very useful tool in controlling outbound Internet traffic.

What if your organization is in a regulated industry, in which it’s sometimes necessary to track some users’ Web access? You can do that on your firewall, of course, but generally speaking, it’s a bad idea to make a firewall log more than you have to for forensics purposes. This is because logging is I/O-intensive, and too much of it can negatively impact the firewall’s ability to fulfill its primary function: assessing and dealing with network transactions. (Accordingly, it’s common practice mainly to log “deny/reject” actions on firewalls and not to log “allowed” traffic except when troubleshooting.) A Web proxy, therefore, is a better place to capture and record logs of Web activity than a firewall or other network device.
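In Squid, per-request logging is controlled in squid.conf. Here is a minimal sketch; the log path and rotation count are assumptions that vary by distribution:

    # Record every client request in Squid's native log format
    access_log /var/log/squid/access.log squid

    # Keep ten rotated generations when "squid -k rotate" is invoked
    logfile_rotate 10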

Another important security function of Web proxies is blacklisting. This is an unpleasant topic—if I didn’t believe in personal choice and freedom, I wouldn’t have been writing about open-source software since 2000—but the fact is that many organizations have legitimate, often critical, reasons for restricting their users’ Web access.

A blacklist is a list of forbidden URLs and domain names. A good blacklist allows you to choose from different categories of URLs to block, such as social networking, sports, pornography, known spyware-propagators and so on. Note that not all blacklist categories necessarily involve restricting personal freedom per se; some blacklists provide categories of “known evil” sites that, regardless of whatever content they’re actually advertising, are known to try to infect users with spyware or adware, or otherwise attack unsuspecting visitors. And, I think a lot of Web site visitors do tend to be unsuspecting.

The classic malware vector is the e-mail attachment—an image or executable binary that you trick the recipient into double-clicking on. But, what if you could execute code on users’ systems without having to trick them into doing anything but visit a Web page?

In the post-Web 2.0 world, Web pages nearly always contain some sort of executable code (Java, JavaScript, ActiveX and so on), and even if your victim is running the best antivirus software with the latest signatures, it won’t examine any of that code, let alone identify evil behavior in it. So, sure enough, the “hostile Web site” has become the cutting edge in malware propagation and identity theft.

Phishing Web sites typically depend on DNS redirection (usually through cache poisoning), which involves redirecting a legitimate URL to an attacker’s IP address rather than that site’s real IP, so they’re difficult to protect against with URL or domain blacklists. (At any rate, none of the free blacklists I’ve looked at include a phishing category.) Spyware, however, is a common blacklist category, and a good blacklist contains thousands of sites known to propagate client-side code you almost certainly don’t want executed on your users’ systems. Obviously, no URL blacklist ever can cover more than a tiny fraction of the hostile Web sites active at any given moment. The real solution to the problem of hostile Web sites is some combination of client/endpoint security controls, better Web browser and operating system design, and moving the antivirus software industry beyond its reliance on virus signatures (hashes of known evil files), on which it has been stuck for decades.

Nevertheless, at this very early stage in our awareness of and ability to mitigate this type of risk, blacklists add some measure of protection where presently there’s very little else. So, regardless of whether you need to restrict user activity per se (blocking access to porn and so forth), a blacklist with a well-maintained spyware category may be all the justification you need to add blacklisting capabilities to your Web proxy. SquidGuard can be used to add blacklists to the Squid Web proxy.

Blacklisting is an admittedly primitive approach, however, so if you’re serious about blocking access to sites that are inappropriate for your users, it makes sense to do some sort of content filtering as well—that is, automated inspection of actual Web content (in practice, mainly text) to determine its nature and handle it accordingly. DansGuardian is an open-source Web content filter that even has antivirus capabilities.
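As a preview of the SquidGuard setup covered later in this series, here is a minimal sketch of how blacklist categories are declared; the database paths, category name and redirect URL are assumptions and vary by distribution and blacklist:

    # /etc/squid/squidGuard.conf (sketch)
    dbhome /var/lib/squidguard/db
    logdir /var/log/squidguard

    # A category is a "dest" backed by domain and URL list files
    dest spyware {
        domainlist spyware/domains
        urllist    spyware/urls
    }

    # Block the spyware category, allow everything else,
    # and send blocked requests to a local explanation page
    acl {
        default {
            pass !spyware all
            redirect http://proxy.example.com/blocked.html
        }
    }

Squid hands each requested URL to SquidGuard through a url_rewrite_program line in squid.conf (called redirect_program in older Squid releases), for example: url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf.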

What if you need to limit use of your Web proxy, but for some reason, can’t use a simple source-IP-address-based Access Control List (ACL)? One way to do this is by having your Web proxy authenticate users. Squid supports authentication via a number of methods, including LDAP, SMB and PAM. However, I’m probably not going to cover Web proxy authentication here any time soon—802.1x is a better way to authenticate users and devices at the network level.

Route-limiting, logging, blacklisting and authenticating are all security functions of Web proxies. I’d be remiss, however, not to mention the main reason many organizations deploy Web proxies, even though it isn’t directly security-related—performance. By caching commonly accessed files and Web sites, a Web proxy can reduce an organization’s Internet bandwidth usage significantly, while simultaneously speeding up end users’ sessions. Fast and effective caching is, in fact, the primary design goal for Squid, which is why some of the features I’ve discussed here require add-on utilities (for example, blacklisting requires SquidGuard).
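To give a feel for how the basics look in practice, here is a minimal squid.conf sketch covering a source-IP ACL and caching; the address range, cache sizes and paths are assumptions to adjust for your own environment:

    # Listen on Squid's default proxy port
    http_port 3128

    # Allow proxying only from the internal LAN, deny everyone else
    acl mylan src 10.0.0.0/24
    http_access allow mylan
    http_access deny all

    # Use up to 256MB of RAM for hot objects and 2GB of disk for the cache
    cache_mem 256 MB
    cache_dir ufs /var/spool/squid 2000 16 256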


Web Proxy Architecture
Suppose you find all of this very convincing and want to use a Web proxy to enforce blacklists and conserve Internet bandwidth. Where in your network topology should the proxy go? Unlike a firewall, a Web proxy doesn’t need to be, nor should it be, placed “in-line” as a choke point between your LAN and your Internet uplink, although it is a good idea to place it in a DMZ network. If you have no default route, you can force all Web traffic to exit via the proxy through a combination of firewall rules, router ACLs and end-user Web browser configuration settings.

Firewall 1 allows all outbound traffic to reach TCP port 3128 on the proxy in the DMZ. It does not allow any outbound traffic directly from the LAN to the Internet. It passes only packets explicitly addressed to the proxy. Firewall 2 allows all outbound traffic on TCP 80 and 443 from the proxy (and only from the proxy) to the entire Internet. Because the proxy is connected to a switch or router in the DMZ, if some emergency occurs in which the proxy malfunctions but outbound Web traffic must still be passed, a simple firewall rule change can accommodate this. The proxy is only a logical control point, not a physical one. Note also that this architecture could work with transparent proxying as well, if Firewall 1 is configured to redirect all outbound Web transactions to the Web proxy, and Firewall 2 is configured to redirect all inbound replies to Web transactions to the proxy.

You may be wondering, why does the Web proxy need to reside in a DMZ? Technically, it doesn’t. You could put it on your LAN and have essentially identical rules on Firewalls 1 and 2 that allow outbound Web transactions only if they originate from the proxy.

But, what if some server-side attacker somehow manages to get at your Web proxy via some sort of “reverse-channel” attack that, for example, uses an unusually large HTTP response to execute a buffer-overflow attack against Squid? If the Web proxy is in a DMZ, the attacker will be able to attack systems on your LAN only through additional reverse-channel attacks that somehow exploit user-initiated outbound connections, because Firewall 1 allows no DMZ-originated, inbound transactions. It allows only LAN-originated, outbound transactions.

In contrast, if the Web proxy resides on your LAN, the attacker needs to get lucky with a reverse-channel attack only once and can scan for and execute more conventional attacks against your internal systems. For this reason, I think Web proxies are ideally situated in DMZ networks, although I acknowledge that the probability of a well-configured, well-patched Squid server being compromised via firewall-restricted Web transactions is probably low.
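As a rough sketch of the Firewall 1 and Firewall 2 policies described above, expressed as Linux iptables rules: the interface names and the proxy’s address (192.0.2.10) are assumptions, and a real rule set would also need state-tracking (ESTABLISHED,RELATED) and logging rules:

    # Firewall 1: eth1 faces the LAN, eth2 faces the DMZ (names are assumptions)
    # LAN hosts may reach only the proxy's Squid port; everything else is dropped
    iptables -A FORWARD -i eth1 -o eth2 -p tcp -d 192.0.2.10 --dport 3128 -j ACCEPT
    iptables -A FORWARD -i eth1 -j DROP

    # Firewall 2: only the proxy may initiate outbound HTTP/HTTPS to the Internet
    iptables -A FORWARD -s 192.0.2.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
    iptables -A FORWARD -j DROP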


Yet to Come in This Series
I’ve explained (at a high level) how Web proxies work, described some of their security benefits and shown how they might fit into one’s perimeter network architecture. What, exactly, will we be doing in subsequent articles? First, we’ll obtain and install Squid and create a basic configuration file. Next, we’ll “harden” Squid so that only our intended users can proxy connections through it. Once all that is working, we’ll add SquidGuard for blacklisting and DansGuardian for content filtering. I’ll at least give pointers on using other add-on tools for Squid administration, log analysis and other useful functions. Next month, therefore, we’ll roll up our sleeves and plunge right into the guts of Squid configuration and administration. Until then, be safe!
Source: Linux Journal, Issue 180, April 2009
