You thought it was buried. You forgot. Someone didn’t document it. A ping sweep didn’t find it. It lay there, dead, and no one found it. But there was a pulse: it’s still running, and it’s alive. And it’s probably unpatched.
Something probed it long ago. Found port 443 open. Jacked it like a Porsche 911 on Sunset Boulevard on a rainy Saturday night. How did it get jacked? Let me count the ways.
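Here’s a minimal sketch of why the ping sweep missed it but the probe didn’t: a box that drops ICMP is invisible to ping, yet a plain TCP connect to port 443 still finds it. The subnet and hosts below are hypothetical.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect probe: True if something is listening,
    even when the host drops ICMP and never answers a ping."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Sweep a (made-up) subnet on 443 instead of trusting ICMP echo:
# for i in range(1, 255):
#     if is_port_open(f"10.20.30.{i}", 443):
#         print(f"10.20.30.{i} answers on 443 -- is it on the asset list?")
```

The point of the sketch: inventory by what answers on real service ports, not by what deigns to answer echo requests.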
Now it’s a zombie living inside your asset realm.
It doesn’t matter that it’s part of your power bill. It’s slowly eating your lunch.
It doesn’t matter that you can’t find it because it’s finding you.
It’s listening quietly to your traffic, looking for the easy, unencrypted stuff. It probably has a few decent passwords to your core routers. That NAS share using MS-CHAPv2? Yeah, that was easy to digest. Too bad the password is the same as the one for every NAS at every branch from the same vendor. Too bad the NAS devices don’t encrypt traffic.
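What the zombie hears is easy to reason about: anything riding a cleartext protocol is readable off the wire. A rough sketch, with an illustrative (not exhaustive) port list and made-up hosts, of flagging the flows a passive listener could read:

```python
# Ports whose default traffic is cleartext; illustrative, not exhaustive.
CLEARTEXT_PORTS = {
    21: "FTP",
    23: "Telnet",
    80: "HTTP",
    110: "POP3",
    143: "IMAP",
    161: "SNMPv1/v2c",
}

def flag_cleartext(flows):
    """flows: iterable of (src, dst, dst_port) tuples from a capture.
    Returns the flows a passive listener could read off the wire."""
    return [(src, dst, port, CLEARTEXT_PORTS[port])
            for (src, dst, port) in flows if port in CLEARTEXT_PORTS]

# Hypothetical capture:
flows = [("10.0.0.5", "10.0.0.9", 80),
         ("10.0.0.5", "10.0.0.9", 443),
         ("10.0.0.7", "10.0.0.2", 23)]
print(flag_cleartext(flows))
```

Feed it from whatever your capture tooling exports; if the list it prints isn’t empty, the zombie is reading those flows too.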
And the certificates on those Wi-Fi routers you so expensively installed back in 2009? Do you know how those certificates were generated? Did you look inside even one of them to discover that all of the certs are the same (none is unique) and all were encrypted with an abacus? Zombies understand an abacus.
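Checking whether “all of the certs are the same” takes minutes: fingerprint each device’s certificate and look for collisions. The sketch below just hashes the PEM bytes; the device names and cert contents are made up.

```python
import hashlib
from collections import defaultdict

def duplicate_certs(certs: dict) -> dict:
    """certs maps device name -> PEM certificate (str or bytes).
    Returns {fingerprint: [devices]} for any cert shared by 2+ devices."""
    by_fp = defaultdict(list)
    for device, pem in certs.items():
        data = pem.encode() if isinstance(pem, str) else pem
        by_fp[hashlib.sha256(data).hexdigest()].append(device)
    return {fp: devs for fp, devs in by_fp.items() if len(devs) > 1}

# Hypothetical fleet: two APs shipped with the same factory cert.
fleet = {
    "ap-lobby":  "-----BEGIN CERTIFICATE-----\nFACTORYDEFAULT\n-----END CERTIFICATE-----",
    "ap-branch": "-----BEGIN CERTIFICATE-----\nFACTORYDEFAULT\n-----END CERTIFICATE-----",
    "ap-hq":     "-----BEGIN CERTIFICATE-----\nUNIQUEKEYHERE\n-----END CERTIFICATE-----",
}
print(duplicate_certs(fleet))
```

Any fingerprint that shows up on more than one device means a shared (likely factory-default) key, and one compromise becomes a fleet compromise.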
Wait, you say someone plugged in a wall-wart server, or perhaps a kewl PoE-powered Raspberry Pi made its way into your cabling system, left by... well, we don’t know exactly who did that.
Zombie servers are there. They’re alive.
And so …
… Shut up about updates
In the ExtremeLabs facility and remote NOC at Expedient, I have a lot of machines and plenty more VMs and containers. They get automatic upgrades, except for VMs used in tests. Those get frozen in time, put into the deep freeze of an old Compellent (now Dell) SAN, then deleted after a year. Goodbye.
The vast majority of updates, vendor-sent patches and fixes, and even driver updates are applied automatically, pending reboots (looking at you, Microsoft).
There was a day, not long ago, when it was a best practice to ignore automatic updates because updates weren’t well vetted by vendors. Lack of regression testing, impossible-to-test variances and “oh, you did that?” mysteries meant explosions were common. This led organizations to keep applications infrastructure-generic, by the book and without third-party products that could introduce errors.
It’s difficult to impossible to do that today. Like it or not, it’s a heterogeneous world. You can no longer carefully wall off critical infrastructure (and what isn’t critical business infrastructure today?) behind operating system instances, hypervisors, sandboxes, containers, unikernels and other barriers so that system failures don’t crater line-of-business apps.
What do you need to do?
- Actually walk around your infrastructure and inspect it, looking for, yes, zombie hardware and untagged critical assets.
- Open up every single hypervisor-hosted or containerized (that is, virtualized) host in your entire domain (cloud included), and find out the exact purpose of each and every instance running. Find out whether each host is getting updates, and what its patch level truly is.
- Write down the result as an audit step.
- Revisit each of these quarterly.

All of the intrusion protection and detection software on the planet allows some degree of normalization. Turn off normalization for a week, a week when no one is on vacation. Listen to the traffic. Revalidate detection/inspection rules. It’s OK to automate this process. Just do it.
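One way to make the quarterly revisit concrete is to keep the per-host audit as data and flag anything whose last successful patch is too old. The hosts, dates and 90-day threshold below are assumptions for illustration, not a standard.

```python
from datetime import date, timedelta

def stale_hosts(inventory, today, max_age_days=90):
    """inventory: dict of host -> date of last successful patch.
    Returns hosts whose patch level is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(host for host, last in inventory.items() if last < cutoff)

# Hypothetical audit snapshot:
inventory = {
    "vm-erp-01":   date(2016, 5, 1),
    "nas-branch3": date(2014, 11, 12),  # zombie candidate
    "rpi-unknown": date(2013, 2, 2),    # nobody claims this one
}
print(stale_hosts(inventory, today=date(2016, 6, 1)))
```

Anything this prints either gets patched, gets an owner, or gets unplugged. Writing the check as code also satisfies the audit step: the inventory file is the record.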
At the end of the day, you have The List. Consolidate it. Examine it. Get another pair of eyes (or more) on the list. ACT ON IT. Lock up the list after acting on what you find. Then do it again.
There are zombie bots waiting for you to slip up.