Imperva, a start-up co-founded by CEO Shlomo Kramer, who also co-founded Check Point Software Technologies Ltd., released a noteworthy application-layer firewall appliance this week. One of the more interesting things Imperva has done lately, though, is publish a research paper entitled Web Application Worms: Myth or Reality? The noncommittal title understates the compelling argument the paper makes: that an attacker can exploit a powerful search engine, such as Google, to launch an automated attack, precisely because Google is so good at finding holes in corporate networks.
Google has officially had little to say about the Imperva paper, except to object to Imperva's use of the term "war googling" to describe an attack made via an online search engine.
"Google asked us to call it 'war searching,' not 'war googling,'" notes Kramer, who said Imperva obliged Google in that regard in a revised version of the paper.
In response to an inquiry from Network World, Google said it didn't give Imperva's ideas about "war googling" much credence and declined to discuss it further.
The term "war googling" is not new; security experts have used it before, aware that powerful search engines such as Google can unfortunately be exploited for nefarious purposes.
Google and other search engines map a given Web site thoroughly in their search for content, using so-called "WebBots" (GoogleBot is Google's WebBot). Sometimes a little too thoroughly for the health of a site owner who has left a few holes unplugged.
The Imperva analysis explains: "As a consequence, once a site has been 'discovered' by a search engine, the WebBot operating on behalf of a search engine will discover all publicly available URLs at the site, including URLs that were not intentionally exposed to the public."
Imperva's paper goes on to describe how an attacker can refine a search to expose database vulnerabilities, for instance, and then automate an attack with "one of the many tools that can be obtained from the Internet that is capable of efficiently achieving actual SQL injection given a URL and its parameters."
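The class of flaw those tools exploit can be sketched in a few lines. This is a minimal, self-contained illustration (the table, parameter names, and crafted input are hypothetical, not taken from the paper): a URL parameter pasted directly into a SQL string lets a crafted value such as `1 OR 1=1` change the meaning of the query, while a parameterized query treats the same input as plain data.

```python
import sqlite3

# Hypothetical in-memory database standing in for a site's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'widget'), (2, 'gadget')")

def vulnerable_lookup(product_id: str):
    # Vulnerable: the raw URL parameter is interpolated into the SQL string,
    # so input like "1 OR 1=1" rewrites the WHERE clause.
    query = f"SELECT name FROM products WHERE id = {product_id}"
    return conn.execute(query).fetchall()

def safe_lookup(product_id: str):
    # Safe: a parameterized query passes the input as data, not SQL.
    return conn.execute(
        "SELECT name FROM products WHERE id = ?", (product_id,)
    ).fetchall()

print(vulnerable_lookup("1"))         # one row, as intended
print(vulnerable_lookup("1 OR 1=1"))  # injection: every row comes back
```

The attack URL in this sketch would simply be the page's ordinary address with `?id=1 OR 1=1` appended, which is exactly the kind of link the automated tools the paper mentions are built to construct.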
In theory, this could all culminate in a so-called "Search of Death," as the Imperva paper calls it, in which an attacker, using the search engine as the intermediary, sets up an anonymous Web site on one of the many free hosting services.
"They then submit the site to a search engine for indexing. When they observe that the search engine has paid them a visit (e.g., by inserting rare terms within the content of the site and searching for them in the search engine) they create a new page which contains the attack URLs."
The next step would be having the WebBot again pay a visit in which it follows the links in the new page and indexes the results. "Following the links in the malicious page means that the WebBot will launch the attack URL against the target site."
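The delivery mechanism described above needs nothing more exotic than ordinary HTML: the attacker's page is a normal-looking document whose hyperlinks happen to be attack URLs, plus a rare made-up token the attacker can later search for to confirm the crawler has visited. A rough sketch, with every domain, URL, and token purely hypothetical:

```python
# Sketch of the "Search of Death" page described in the Imperva paper.
# All names here are invented for illustration; ".example" is a reserved
# domain that resolves nowhere.
attack_urls = [
    "http://victim.example/search?q=' OR '1'='1",
    "http://victim.example/item?id=1 OR 1=1",
]

# A rare, made-up token: the attacker searches for it later to detect
# that the search engine's WebBot has indexed the page.
canary = "xq7zk-canary-token"

lines = ["<html><body>", f"<p>{canary}</p>"]
for url in attack_urls:
    # To the crawler these are just links to follow; following them
    # means the crawler itself issues the attack requests.
    lines.append(f'<a href="{url}">link</a>')
lines.append("</body></html>")
page = "\n".join(lines)
print(page)
```

Nothing on the page identifies the attacker: the requests that reach the target originate from the search engine's own crawler.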
In this situation, "Google is essentially launching these attacks as part of its regular indexing," Kramer explains. "The problem is not with Google. The problem is with the applications. The responsibility is in fixing the applications."
Imperva, which describes a few variations on this "Search of Death" idea in its discussion paper, says it doesn't know whether the technique has already been used in actual attacks on Web sites. "But we have tested it in a contained environment in our lab," Kramer adds, and Imperva is certain it could be.