Judging by initial appearances, our security testing turned up a ton of vulnerabilities – nearly 150 of them. In reality, however, none represented actual issues in the Huawei switch.
Both the security test tools we used – Spirent’s Mu-8010 and Rapid7’s Metasploit – produce nicely formatted reports explaining each attack. The reports go into detail as to what each attack does, and why the vulnerability would be a problem. “Would be” is the operative phrase here; the reports do not document what actually happened on the switch.
Spirent’s Mu tool performs a health check after each SSH fuzzing attack, sending a new, valid SSH connection request to determine if the daemon is still open for business. If the device doesn’t respond to the connection request, the Spirent tool flags this as a fault and tries again. The more retry requests that fail, the higher the confidence level the Spirent report assigns to the fault. Metasploit’s reporting is simpler; it tabulates how many vulnerabilities and cracked passwords it found.
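The idea behind such a health check is simple enough to sketch. The snippet below is not the Mu tool’s actual logic, just a minimal illustration of one way to probe SSH liveness: open a TCP connection and look for the version banner an SSH server sends first. The host and port values are placeholders.

```python
import socket

def ssh_alive(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if an SSH daemon answers with its version banner.

    Per RFC 4253, the server sends an identification string beginning
    with "SSH-" as soon as the TCP connection is established.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(64)
            return banner.startswith(b"SSH-")
    except OSError:
        # Connection refused, reset, or timed out: daemon not answering.
        return False
```

A fuzzing harness would call something like `ssh_alive("192.0.2.1")` between attack iterations; a `False` result is what a tool such as Mu would flag as a fault.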
The issue with both approaches is that they report only from the tester’s side, not from the switch’s. Suppose, for example, an SSH overflow attack causes the SSH daemon to be unresponsive for a couple of seconds. That would be flagged as a fault, but it’s not the same thing as proof that an overflow actually occurred.
The key here is checking the switch, not just the test tool. In the case of a successful overflow attack, logging or other monitoring would show that the attack resulted in privilege escalation, a system hang, or other problems. That kind of checking is essential to complete the picture. That’s why we ran additional monitoring, external to our security test tools, to verify the switch’s control- and data-plane availability.
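One practical consequence of probing from outside is that a single failed check can’t distinguish a daemon that is momentarily busy from one that has crashed or hung. A hedged sketch of how external monitoring might separate the two, by retrying and seeing whether the service recovers (the `probe` callable, retry counts, and labels are illustrative, not from the test described here):

```python
import time

def classify_outage(probe, retries: int = 5, delay: float = 2.0) -> str:
    """Repeatedly call probe() (a function returning True when the
    service answers). A service that recovers within the retry window
    was likely just busy; one that never answers may have hung."""
    for attempt in range(retries):
        if probe():
            return "recovered" if attempt else "healthy"
        time.sleep(delay)
    return "unresponsive"
```

Only the "unresponsive" outcome, corroborated by logs or console access on the device itself, would point to a real crash rather than a transient busy period.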
Thus, none of the many faults found in security testing were actual vulnerabilities. All they indicated was that the switch’s SSH server was busy – and a daemon that can’t do additional work shouldn’t answer new connection requests, in our view.