Network World - Given that cloud computing is still emerging, it shouldn't come as a surprise that opinions vary widely on the best way to architect the storage. In fact, it seems likely that no single architecture fits every case -- different types of private cloud almost always require different approaches.
Or do they?
In a recent interview, Piston Cloud CEO Joshua McKenty asserted flatly that, when it comes to private clouds, the best approach is to integrate the storage with the servers, a setup that offers performance far beyond that of more traditional approaches.
"The right model is a 2U server with a bunch of JBOD," McKenty says. "Because you're limited not by the rate of any one drive, but by the number of spindles."
It works, according to McKenty, because direct-attached storage in the servers doesn't have to get routed through a single gatekeeper like a filer or even a SAN switch. Instead of "20 servers all talking to this one filer at a total bandwidth of 10G," he says, each server has its own 10G port, meaning you have 200G worth of bandwidth. Even if the individual disks are slower and you need to store multiple copies of files, total performance far exceeds what you can achieve with a more traditional setup.
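McKenty's bandwidth arithmetic can be sketched with the article's own numbers; the replica count used to discount for extra copies is an illustrative assumption, not something he specifies:

```python
# Illustrative comparison using the figures from the interview:
# 20 servers sharing one filer behind a single 10 Gb/s link, versus
# each server reading direct-attached storage through its own 10 Gb/s port.

SERVERS = 20
LINK_GBPS = 10   # per-port network bandwidth (from the article)
REPLICAS = 3     # assumed copy count for DAS redundancy (hypothetical)

# Shared filer: every server contends for the same 10 Gb/s pipe.
filer_aggregate = LINK_GBPS                    # 10 Gb/s total
per_server_filer = filer_aggregate / SERVERS   # 0.5 Gb/s each

# Direct-attached storage: each server uses its full port speed.
das_aggregate = SERVERS * LINK_GBPS            # 200 Gb/s total

# Even after discounting for writing extra replicas, the aggregate
# still dwarfs the single-filer bottleneck.
das_effective = das_aggregate / REPLICAS       # ~66.7 Gb/s

print(filer_aggregate, das_aggregate, round(das_effective, 1))
```

Under these assumptions, the distributed layout delivers roughly 20x the aggregate read bandwidth of the shared filer, which is the "200G worth of bandwidth" claim above.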
This idea is "heresy" as far as many IT departments are concerned, McKenty says, because they want to use NAS or SAN to handle their storage workloads instead of integrating storage and compute on every node. He describes how a NASA operations team tried to replace one of his integrated systems.
"They brought in a $20,000 filer and put it in the bottom of the rack. It had Fibre Channel. And they put in a Brocade switch. And they cabled everything up with Fibre Channel and FCoE. And they spent four days of downtime tweaking this thing, and they got 20% of the performance that we'd been getting out of our configuration," he says.
But is this integrated JBOD approach just for private clouds that have special NASA-like workloads?
Absolutely not, McKenty says in a follow-up conversation. "We've seen customers use the same architecture for VDI workloads, Web hosting, risk analysis [and] other compute-heavy things like financial simulations or Monte Carlo simulations."
The fact that storage density and processor speed are growing far more quickly than network capacity -- an idea McKenty describes as the "mobility gap" -- is a key reason why the distributed model works so well, he says.
"The speed of light trumps your infrastructure. The latency of moving your data is never going to keep up with the growth of how much data we're saving and how much processing we want to do on it," he says. "And yet, if you look at a SAN or NAS architecture, you're always putting all of your storage essentially in one device on the far end of a single wire."
Needless to say, not everyone agrees with McKenty's characterization of the issues involved.
"What he's talking about doing is sort of like stepping back in time about 10 or 15 years," says Brocade product marketing manager Scott Shimomura. "He's making the argument that the direct-attached storage approach is the best way to ensure performance, and that just isn't the way customers think about their storage today."