Security upgrades ongoing, but some argue more needs to be done.
The unusual activity began two weeks before the attack. Officials from the Cooperative Association for Internet Data Analysis (CAIDA), which had begun monitoring Internet nameserver behavior at the start of 2002, noticed varying levels of performance degradation in early October of that year. Little did they realize that on Oct. 21 they would witness a flood of ping messages aimed at the Internet's 13 DNS root nameservers - the most notorious denial-of-service attack on the Internet to date.
"It was an attempt to make a massive problem," says KC Claffy, principal investigator at CAIDA . "They certainly made a blip on a graph."
But the Internet and its users got off easy. The barrage lasted only an hour, and no end users were affected.
The attack did, however, serve as a wake-up call, and network operators and others have taken steps to better secure the Internet since then. But some still warn that the Internet remains susceptible to attack and needs more authoritative oversight.
"If somebody was to do a real concerted, knowledgeable attack, it wouldn't be very difficult to have a catastrophic impact on a huge component of commerce," says Larry Jarvis, vice president of network engineering at Fidelity Investments. "It would be huge to the U.S. economy and to a lot of companies that now view the Internet as the equivalent to a dedicated circuit to all these entities."
Clif Triplett, global technology information officer at General Motors, says he is worried mostly about router and host software bugs, as well as traffic floods such as distributed DoS (DDoS) attacks bringing down the 'Net.
"I'm highly concerned about it," Triplett says. "If that network is a core piece of your business, I think you're at a risk."
These IT professionals are not alone. Two-thirds of the 1,300 "technology leaders, scholars and analysts" surveyed recently by the Pew Internet & American Life Project said they "expect a major attack on the Internet or the U.S. power grid within the next 10 years."
Experts warn that the 'Net is particularly vulnerable in these areas:
DNS root servers.
Border Gateway Protocol (BGP) peering points.
Individual router and switch elements.
Host/endpoint operating systems.
The root of the problem
The 13 DNS root servers sit at the top of the hierarchy that resolves Internet names into addresses. If all of them were knocked out, Internet sites would become unreachable by name as cached DNS data expired.
The servers repel distributed DoS attacks every day, operators say. CAIDA research shows that up to 85% of the queries against the DNS servers are "bogus" or repeated from the same host.
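CAIDA's exact methodology isn't described here, but the core idea - flagging queries repeated from the same source host as bogus - can be sketched in a few lines. The log entries and addresses below are illustrative, not real root-server data:

```python
from collections import Counter

# Hypothetical, simplified query log: (source_host, query_name) pairs.
# In CAIDA's finding, a large share of root-server traffic consisted of
# repeated or malformed queries; this sketch just tallies the repeats.
queries = [
    ("198.51.100.7", "example.com."),
    ("198.51.100.7", "example.com."),
    ("198.51.100.7", "example.com."),
    ("203.0.113.9", "example.org."),
]

counts = Counter(queries)

# Any query seen more than once from the same host is flagged as suspect.
repeated = {key: n for key, n in counts.items() if n > 1}

total = len(queries)
suspect = sum(repeated.values())
print(f"{suspect}/{total} queries are repeats from the same host")
```

A production analysis would also have to classify malformed names, queries for nonexistent top-level domains and leaked private addresses, which made up much of the "bogus" traffic.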
The system has been bolstered since the 2002 attack, with root servers now consisting of 50 to 100 physically distributed, highly redundant boxes in 80 locations across 34 countries. In 2002, far fewer servers were located in 13 sites across four countries.
This level of distribution and redundancy makes a complete shutdown of the DNS system unlikely, says Paul Mockapetris, chairman and chief scientist of IP address management vendor Nominum and the inventor of DNS.
The physical servers use Anycast, a routing technique that heightens resiliency by multiplying the number of servers with the same IP address and balancing the load across an army of geographically dispersed systems.
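At the routing level, anycast simply means that several independent sites originate the same prefix, and BGP's normal route selection steers each client to the nearest instance. A hedged sketch in Cisco IOS-style syntax (the AS numbers and addresses are illustrative, not any real root operator's):

```
! --- Site A: originates the service prefix into BGP ---
router bgp 64500
 network 192.0.2.0 mask 255.255.255.0
 neighbor 198.51.100.1 remote-as 64496

! --- Site B: originates the SAME prefix from another location ---
router bgp 64500
 network 192.0.2.0 mask 255.255.255.0
 neighbor 203.0.113.1 remote-as 64511
```

Because every site answers for the same address, a flood of traffic is split among the instances, and taking one site offline merely shifts its clients to the next-closest one.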
"If I was going to try and arrange a DNS 9/11, it's a very bad target to try and attack because it's so distributed - you'd have to take [the servers] out everywhere," Mockapetris says. "If you took out one root server today, nobody would notice."
But the more distributed a system is, the more difficult it is to defend, notes Stephen Cobb, an independent security consultant who was recently quoted in a Network World column as saying the 'Net could be brought down and kept down for 10 days or more. Cobb says the 'Net remains up only because of the restraint of those who know how to bring it down.
"I just don't think technologically we can ever harden the Internet to where it's invulnerable to intelligent, determined people," he says. "The reason it hasn't gone down for days so far is that the people who know how to do it aren't so inclined."
However, the good guys are inclined to implement security best practices, like those outlined in an IETF informational document on root server operation called RFC 2870, says Jose Nazario, security researcher and senior software engineer at Arbor Networks, which makes products carriers use to protect their networks from cyberattacks. Originally drafted in 2000, RFC 2870 has been extended over the past couple of years.
Even so, experts don't discount the possibility of another attack equal to or exceeding the scope of the October 2002 event. But they also are confident that the DNS root servers and Internet users will experience minimal disruption.
"There's no way to get them all with truck bombs; there's no way to get them all with a single attack; and there's no way to keep an attack going long enough that I could not usefully counteract it," says Paul Vixie, president of the Internet Systems Consortium, which also operates the DNS F root server. "It's better for me to simply not accept any traffic from [the attacker] even though I will be losing a certain number of Web hits. As soon as you rendered the attack worthless, then it's actually in the attacker's best interests to stop launching it because otherwise you will trace it back."
The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for top-level coordination and global policy-making for the DNS, and plays a central role in assuring the integrity and stability of the system.
"Taking out the whole Internet for 10 days - I'm a little skeptical," says Steve Bellovin, a computer science professor at Columbia University, former researcher at AT&T Labs and a member of ICANN's Security and Stability Advisory committee. "If you look at the kinds of attacks we've had thus far - worms and [distributed] DoS attacks - many of these things have had noticeable impact in the short run but they weren't too hard to counter."
Routing around catastrophe
Bellovin and others are not as confident about the routing infrastructure. Cisco, the leading provider of Internet routers, regularly issues bug alerts. And BGP, which distributes routing information between networks on the Internet, is susceptible to attacks that exploit IP address spoofing.
"BGP peering has some security problems," says Sam Hartman, area director for the IETF's Security Area working group. "What's there now is hard to configure, and it's something that the community has identified as a real problem. You're not just depending on the security of the person you're directly connected to; you're also depending to some extent on the security of the people that are connected to them."
Work has been underway for a while on methods to authenticate BGP route advertisements. Secure BGP (S-BGP) has been incubating for more than eight years, and its alternative, Secure Origin BGP (soBGP), is also a multiyear effort. Yet these proposals are not implemented because router vendors have not incorporated them into their products - they contend BGP already has enough built-in security features to provide adequate protection when properly deployed.
"The workload gets significantly higher, and it's kind of a turnoff for the people who are not major core operators," Arbor's Nazario says.
Many ISPs implement TCP MD5 cryptographic hashing (RFC 2385) to authenticate BGP data. But its use is not mandated; operators can choose not to turn the technique on for various reasons, such as router performance degradation.
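In practice, RFC 2385 protection is a per-neighbor setting. A hedged sketch in Cisco IOS-style syntax (AS numbers, addresses and the secret are placeholders):

```
! Enable TCP MD5 authentication on a BGP session. Every TCP segment
! for this session carries an MD5 digest keyed by the shared secret;
! both peers must configure the same secret or the session will not
! establish, and forged or injected segments are discarded.
router bgp 64496
 neighbor 192.0.2.2 remote-as 64511
 neighbor 192.0.2.2 password <shared-secret>
```

Computing the digest on every segment is part of the performance cost operators weigh when deciding whether to enable it.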
"But [MD5] is easy to deploy in a hurry if the link starts being attacked," says Scott Bradner, university technology security officer at Harvard University and a network design and security consultant. Bradner is also a Network World columnist .
IPSec also can be used as an alternative to MD5 to add some level of protection to the BGP transport connection, experts say. Operators also can implement infrastructure access control lists, the Generalized TTL Security Mechanism (also known as the BGP TTL security hack) - which is designed to protect against CPU overload-based attacks - prefix filters and priority queuing for control-plane traffic, they say.
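The TTL mechanism relies on the fact that a directly connected eBGP peer's packets arrive with a predictably high TTL, while spoofed packets from farther away cannot. A hedged sketch in Cisco IOS-style syntax (AS numbers and addresses are illustrative):

```
! Generalized TTL Security Mechanism for a directly connected peer.
! Both peers send BGP packets with TTL 255; with "hops 1", packets
! arriving with TTL below 254 - i.e., originated more than one hop
! away - are dropped before they ever reach the BGP process,
! shielding the router's CPU from remote session-flooding attacks.
router bgp 64496
 neighbor 192.0.2.2 remote-as 64511
 neighbor 192.0.2.2 ttl-security hops 1
```

Because the check happens in the forwarding path, it costs far less than cryptographic authentication, which is why it is often layered with, rather than replaced by, MD5.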
There also is an informational IETF document - RFC 3882 - on configuring BGP to block DoS attacks.
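RFC 3882 describes, among other things, destination-based remotely triggered black-holing: a trigger router injects a route for the attacked address into BGP, and every edge router drops matching traffic. A hedged sketch in Cisco IOS-style syntax (the tag, community and addresses are illustrative conventions, not mandated values):

```
! --- On every edge router: a /32 "discard" next-hop routed to Null0 ---
ip route 192.0.2.1 255.255.255.255 Null0

! --- On the trigger router: black-hole the attacked address ---
! The tagged static route is redistributed into BGP with its next hop
! rewritten to the discard address, so every edge router drops the
! flood at the network boundary instead of carrying it to the victim.
ip route 203.0.113.66 255.255.255.255 Null0 tag 666
router bgp 64496
 redistribute static route-map RTBH
route-map RTBH permit 10
 match tag 666
 set ip next-hop 192.0.2.1
```

The trade-off is that legitimate traffic to the victim address is dropped too; the technique sacrifices one destination to keep the attack from saturating links shared by everyone else.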
Hardware needs hardening
Routers themselves also are patched quickly when software bugs are discovered, Bradner says, despite - and thanks to - the frequency at which they occur. Cisco has regularly reported distributed DoS vulnerabilities in its IOS software over the years. But the fact that the vendor has reported them and recommended patches in a timely manner has helped keep disruptive events to a minimum.
Still, that's little solace to GM's Triplett. He says more and more telecom operators run the latest versions of routing software, not only to get new features but also to maintain release consistency, which makes bugs easier to track and fix.
But the latest software is usually the buggiest - the Release 1.0 conundrum.
"This is kind of a Catch-22 situation," Triplett says. "All of a sudden, if they all get on the same release . . . you can almost start having an effect similar to what we saw on the power grid" in the Northeast two years ago, with the ripple effect electrical blackout.
For that reason, Triplett and other experts consider the 'Net's routing infrastructure - BGP and the routers themselves - to be its most vulnerable parts. Work continues to improve routing security through the IETF's Routing Protocol Security (RPSec) working group, which within the past six months has published new documents on generic threats and security requirements for routing protocols, and on Open Shortest Path First vulnerabilities.
RPSec plans to continue to evaluate and document current and proposed routing security mechanisms. Meanwhile, US-CERT, under the Department of Homeland Security, continues to post vulnerability alerts on Cisco and Juniper routers, in addition to other cyberthreats.
Software bugs also are a problem for Internet hosts and endpoints. Indeed, the majority of worms and other successful cyberattacks are made possible by vulnerabilities in a small number of common operating system services on Internet hosts, according to The SANS Institute, a security training and certification organization that annually publishes a Top 20 Internet security vulnerability list.
"If you want to hurt the network you attack the routers; but if you want to hurt the people using the network, then the operating systems right now are the main attack vector," says Alan Paller, director of research at SANS.
The spread of infamous worms such as Blaster, Slammer and Code Red can be traced directly to exploitation of unpatched vulnerabilities, according to SANS. Attackers scanning the Internet for vulnerable systems count on major corporations not fixing the problems.
But the problems are not theirs to fix, Paller says.
"Vendors have complete responsibility," he says, adding that product vendors and ISPs should work more closely to better secure host operating systems.
The operating system vulnerabilities have minimal effect on the security of the 'Net infrastructure itself, Paller notes. However, they serve as the primary attack vehicle for those looking to disrupt specific sites.
To that end, SANS publishes patches and workarounds for the Top 20 vulnerabilities. Microsoft, for its part, continues to work on Windows patch management tools, code to thwart worms and hackers, and acquisitions of anti-virus, anti-spyware and anti-spam companies.
Microsoft also has offered to work more closely with governments around the world on detecting and mitigating IT security threats.
Meanwhile, open source developers and vendors continue to develop their own patch management tools.
Internet watchers say the network of networks remains vulnerable to attack but is in better shape than it was two-and-a-half years ago.
"There are a number of things that could have a multi-hour major impact, but I doubt very much that there is anything that would have even as much as a day's impact over any significant chunk of the 'Net," Harvard's Bradner says.
Those operating and securing the Internet insist it's no more vulnerable than any other business-critical infrastructure.
If the 'Net went down, "it would be another disaster, just like many of the natural disasters," IETF's Hartman says. "But business is about managing those risks."
Still, it's a risk that perhaps warrants more continual attention than any other.
"I just think we're putting a lot of eggs into a basket that doesn't have enough control around it," Fidelity's Jarvis says.