Sharpen your pencils: It's time for Web Application Security 101.
A traditional firewall is commonly employed to restrict Web site access to Ports 80 and 443, used for HTTP and Secure Sockets Layer communications, respectively. However, such a device does very little to deter attacks that come over these connections. URL query string manipulations including SQL injection, modification of cookie values, tampering with form field data, malformed requests and a variety of other nasty tricks often ride in on allowed, legitimate traffic.
A Web application firewall, such as those reviewed in this issue (see review), might help address security holes in Web servers and Web applications, but there is certainly a great deal that network security professionals could and should do before and after employing such measures.
Tip 1: Don't trust, authenticate.
If you are in charge of designing or administering a public Web site, you need to embrace the fact that you cannot trust your users. If you are particularly paranoid, you might extend this concept to an extranet or even an internal site. But the point is that unless users authenticate themselves with the site somehow, you have no idea who they are or what their intentions might be.
This is not to suggest that a hacker hides behind every IP address accessing your site, but can you easily separate legitimate traffic from illegitimate traffic? Are those excessive 404 errors in your server log simple mistakes or someone probing your defenses? You should always err on the side of caution, and the tips that follow embrace this spirit.
Tip 2: Keep a low profile.
The first step for a potential intruder is to gather information about your Web server and any hosted application. Don't expose anything your end users don't need to know and consider the following simple anti-reconnaissance tactics:
• Remove personal information from your WHOIS records that might be useful in a social engineering attack and employ a role account instead.
• Make sure your machine is not named something that indicates its operating system or version.
• Remove the server header from your Web server's response.
• Remap file extensions of dynamic pages, for example .jsp to .shtm.
• Add custom error pages that suppress useful information about the server or associated development platform.
• Do not expose sensitive file or directory names in your robots.txt file.
You can go deeper with anti-reconnaissance by tweaking your network firewall and server connection settings to fool tools such as Nmap (www.insecure.org) that try to identify your server via its TCP stack responses. At the HTTP level, you might consider changing your Web server's responses to alter header order, mask session cookie names and remove other items in the response. A tool such as ServerMask for Internet Information Services can help you perform many of these masking tricks.
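The header-stripping tactic can be sketched as a small WSGI middleware. This is a minimal sketch, not a production approach (a real deployment would mask headers in the server or proxy configuration itself), and the header list and function names are illustrative assumptions:

```python
# Sketch: a WSGI middleware that strips identifying headers from every
# response before it reaches the client. Header names are assumptions.

STRIP = {"server", "x-powered-by", "x-aspnet-version"}

def mask_headers(app):
    """Wrap a WSGI app so identifying response headers never leave the server."""
    def wrapped(environ, start_response):
        def filtered_start(status, headers, exc_info=None):
            # Drop any header whose name is on the strip list.
            safe = [(k, v) for k, v in headers if k.lower() not in STRIP]
            return start_response(status, safe, exc_info)
        return app(environ, filtered_start)
    return wrapped
```

The same wrapper is also a convenient place to rewrite cookie names or reorder headers if you take the masking idea further.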
Obviously, the competent Web administrator does not solely embrace security by obscurity. True protection is required. However, inviting attack to test your site's "armor" is foolish; the aim is only to keep potential attackers from easily sizing up defenses and attacking more successfully by giving the site and server the equivalent of camouflage.
Tip 3: Use misdirection and misinformation.
Beyond reducing what you reveal, consider planting misinformation and misdirection in what you do reveal. Looking like another type of server, pretending to use a different technology or giving contradictory information can trip up an attacker, luring him into the wrong types of attacks and clearly signaling his intentions. For example, you might add fake "off-limits" directories or file names to a site's robots.txt, comments or error pages so that users or tools with bad intent reveal themselves for monitoring or blocking. Other examples of misdirection include:
• Randomized network and HTTP server signatures found in the response packets.
• False administrator names in page comments or network records; because the names are known internally to be fake, any request that uses one indicates a social engineering attack in progress.
• Decoy servers or honeypots (www.honeypots.org) to confuse intruders.
• Varying error responses, or making your site "play dead" by returning "500 Server Error" responses to every request from an obvious intruder.
There is a great deal of room to expand on the idea of misdirection. Creating a forest of decoy devices and sites that rotate their signatures could make finding your site a great pain for a potential intruder. A service such as Netbait suggests such thinking is not so wild.
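One way to make decoy paths actionable is to flag any client that touches them, since no legitimate user would request a path that appears only in robots.txt. A minimal sketch, assuming requests have already been parsed into (IP, path) pairs and that the decoy paths below are hypothetical:

```python
# Sketch: flag clients that request decoy paths planted in robots.txt,
# comments or error pages. DECOYS and the input format are assumptions.

DECOYS = {"/admin-backup/", "/private-data/"}

def flag_probers(requests):
    """Return the set of client IPs that touched any decoy path."""
    return {ip for ip, path in requests if path in DECOYS}
```

The flagged IPs could then feed a monitoring report or a blocking rule.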
Yet be careful - camouflage will not fix underlying problems, and misdirection might anger an attacker, inviting further assault. In many cases these tactics are useless against the "stupid" attack from a robot, worm or script kiddie following a canned script. These folks don't care what they are hitting - they will throw IIS attacks at Apache boxes and vice versa - so make sure you can handle whatever they send.
Tip 4: Forcefully deny bad requests.
A user's request just might not be safe to execute. Simple attacks focus on trying to modify the HTTP request to cause something bad to happen. You can use an application firewall or server filter to eliminate bad HTTP requests such as very long URIs, funny characters, unsupported methods and headers, and any other obviously malformed requests.
You should be aware of the types of data and programs in your site. If you know what is allowed, anything else should be disallowed - the so-called positive model. For example, requests for Active Server Pages files in a site built in PHP are problematic. Make sure to purge all unused files, particularly backup files (.bak). Turn off your server's directory browsing option. And remove any unused extensions from your server's configuration.
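A positive-model filter of this kind might look like the following sketch. The method list, extension whitelist and length limit are illustrative assumptions that would be tuned to the actual site:

```python
# Sketch of a positive-model request check: everything not explicitly
# allowed is denied. All the whitelists and limits here are assumptions.

ALLOWED_METHODS = {"GET", "POST", "HEAD"}
ALLOWED_EXTENSIONS = {".php", ".html", ".css", ".js", ".png"}
MAX_URI_LENGTH = 1024

def is_request_allowed(method, uri):
    """Deny anything outside the site's known-good profile."""
    if method not in ALLOWED_METHODS:
        return False
    if len(uri) > MAX_URI_LENGTH:
        return False  # very long URIs often signal buffer-overflow attempts
    if any(ord(ch) < 0x20 or ord(ch) > 0x7E for ch in uri):
        return False  # control or non-ASCII bytes in the raw URI
    path = uri.split("?", 1)[0]
    last_segment = path.rsplit("/", 1)[-1]
    if "." in last_segment:
        ext = "." + last_segment.rsplit(".", 1)[-1]
        if ext.lower() not in ALLOWED_EXTENSIONS:
            return False  # e.g. an .asp request on a PHP site, or a stray .bak
    return True
```

Note the .bak case: a leftover backup file fails the extension check even if someone guesses its name.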
Tip 5: Sanitize user requests and inputs.
Hidden form fields and cookies also serve as inputs that you should be careful to monitor. Avoid putting sensitive data in them, and consider adding a checksum to verify they have not been tampered with. Be particularly careful in the case of session cookies: If their values are too predictable, your application might be open to a cookie-hijacking attack.
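The checksum idea can be sketched with Python's standard hmac module. The key name and the "value|mac" format are assumptions, and the real key must never leave the server:

```python
# Sketch: append a keyed checksum (HMAC) to a cookie or hidden-field value
# so that any client-side tampering is detectable on the next request.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"  # assumption; keep server-side

def sign(value: str) -> str:
    """Return the value with its HMAC appended."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(signed: str):
    """Return the original value, or None if the checksum does not match."""
    value, _, mac = signed.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None
```

Because the key is secret, a user who edits the value in his browser cannot produce a matching checksum, and the server rejects the request.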
When application flow is important, make sure you check referring URLs and deny any page requests out of sequence. To signal problems, you can add extra, encrypted cookie information to indicate entry point and last page visited.
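A minimal sketch of such a flow check, assuming a hypothetical map from each sensitive page to the pages allowed to refer to it:

```python
# Sketch: deny page requests that arrive out of sequence by checking the
# referring page against an allowed-flow map. The map is an assumption.

ALLOWED_FLOW = {
    "/checkout/pay": {"/checkout/review"},
    "/checkout/review": {"/cart"},
}

def in_sequence(page, referer):
    """True if the request arrived from an expected previous page."""
    allowed = ALLOWED_FLOW.get(page)
    return allowed is None or referer in allowed  # unlisted pages are open
```

Keep in mind that Referer headers can be forged or stripped, so a check like this is one signal among several, not proof of legitimate flow.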
Tip 6: Monitor and test continuously.
If you are examining logs only when things go wrong, you aren't doing enough. Many times it's already too late, and logs provide only forensics to help you reconstruct the crime or patch the hole. Fortunately, spotting a problem quickly isn't hard because application attacks are clearly recorded in your server access log, and unless the compromise gives the attacker server-level access, he won't be able to cover his tracks easily. However, as a precaution, you might consider using multiple logging hosts and monitoring your site and applications both on and off the network.
While application attacks are often more difficult than network intrusions for an intruder to cover up, sorting the bad requests from the good can be hard. To narrow a log down, try filtering on unknown user agents, unresolvable IP addresses and very fast requests from one source. Pay attention to your server's error log and look at 404 requests: They are often not simple mistakes but failed exploits or probes.
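Two of those filtering heuristics can be sketched as follows, assuming log entries have already been parsed into (IP, status, user agent) tuples; the known-agent list is an illustrative assumption:

```python
# Sketch: narrow an access log to suspicious entries by flagging 404
# responses and unrecognized user agents. Input format is an assumption.

KNOWN_AGENTS = ("Mozilla", "Opera", "Googlebot")

def suspicious(entries):
    """Return entries that are 404s or come from unknown user agents."""
    flagged = []
    for ip, status, agent in entries:
        if status == 404 or not agent.startswith(KNOWN_AGENTS):
            flagged.append((ip, status, agent))
    return flagged
```

Adding request-rate tracking per source IP would cover the very-fast-requests heuristic as well.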
Make sure you test your site using vulnerability tools such as N-Stealth (www.nstalker.com) to find and plug obvious holes, but embrace the fact that "zero-day" attacks will continue and an as-yet-indefensible attack might occur.
Tip 7: Prepare for the worst.
Despite your best efforts, someone might compromise your Web server or application. Rather than ignoring that possibility, you should come up with a plan to address a variety of compromises, including:
• Server compromise.
• Site defacement.
• Application-level denial of service (DoS).
• Sensitive data exposure.
In the case of server compromise, rolling back to a former state, going off-line and trying to plug holes are really your only choices. Similarly, when faced with site defacement you want to be able to roll back the site quickly or put a standby page in place. Dealing with defacement isn't hard, but how can you detect it rapidly? A blatant home page modification by an intruder is obvious, but without page checksums, detecting minor data modifications might be difficult. Imagine the damage the alteration of a financial press release on a corporate site could do.
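The page-checksum idea can be sketched with Python's hashlib. The baseline would be recorded from a known-good copy of the site and checked on a schedule; paths and content here are illustrative:

```python
# Sketch: keep a baseline of page checksums and report any page whose
# current content no longer matches, catching subtle defacement.
import hashlib

def checksum(content: bytes) -> str:
    """Hash a page's raw bytes for comparison against the baseline."""
    return hashlib.sha256(content).hexdigest()

def changed_pages(baseline, current):
    """Return the paths whose content differs from the recorded baseline."""
    return [path for path, content in current.items()
            if checksum(content) != baseline.get(path)]
```

A change report on an "unchanged" press release is exactly the kind of subtle tampering this catches that eyeballing the home page would miss.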
DoS at the network level is a known attack that many devices can deal with, but application-level DoS is more difficult to handle. With the potential for a robot attack using apparently legitimate HTTP traffic from open proxies all over the Internet, it might be very difficult to distinguish the good users from the bad. Work still needs to be done in this area, but actively monitoring site traffic is an important first step.
Sensitive data exposure - such as the revelation of customer data including credit card numbers, for example - can be difficult to catch. Security software and devices such as the Teros offering (see story) can monitor pages for sensitive data patterns and block the data from being revealed. However, active monitoring is really the best bet because what is sensitive might not always be as obvious as a Social Security or credit card number.
Tip 8: Cross the developer-administrator chasm.
The greatest challenge in Web application security is that often the person who has built the application is not in charge of securing the application. Without intimate knowledge of the workings of a Web site, it might be difficult for an administrator to secure it adequately. On the flip side, developers are likely unaware of the types of attacks that occur and, therefore, don't write their code to address them. Getting the two groups together to share knowledge is truly the ultimate weapon against Web application security problems.