After trying its own data center, Zynga retreats to the cloud

What can we take away from Zynga's brief stint trying to run its own data center?

Image: Zynga data center (Credit: Reuters)

In a surprising move, game maker Zynga tried to do its own data center thing, then went back to the cloud, reports the Wall Street Journal.

Was it a game of chicken, to see who would bleed first between Zynga and Amazon? Or was it a venture where, as the WSJ suggests, you discover that your groove doesn't involve building expertise in a field where lowering your costs may never pay off? Sorry to sound cryptic. Let me explain.

The capex of running a data center can be gruesome. Even with way-cool software-defined routing, eco-cooling, and plentiful cheap connectivity, data centers are still expensive; one can only hope the payback comes within decades. In the old days, organizations would install a bunker of a data center, often deep in the sub-basements of a building, designing everything for the long term and sinking cooling and initial infrastructure costs that included some wild-haired expansion factor over the perceived life of the building.
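
To make that capex math concrete, here's a back-of-the-envelope sketch in Python. Every dollar figure is a hypothetical placeholder, not Zynga's (or anyone's) actual numbers; the point is the shape of the calculation, not the inputs.

```python
# Back-of-the-envelope payback estimate for building your own data center.
# All figures below are hypothetical placeholders, not actual Zynga numbers.

capex = 50_000_000                 # up-front build cost: shell, power, cooling ($)
own_opex_per_year = 6_000_000      # staff, power, maintenance once it's running ($/yr)
cloud_cost_per_year = 10_000_000   # what the equivalent cloud footprint would bill ($/yr)

annual_savings = cloud_cost_per_year - own_opex_per_year
payback_years = capex / annual_savings

print(f"Annual savings vs. cloud: ${annual_savings:,.0f}")
print(f"Simple payback period:    {payback_years:.1f} years")
# With these made-up numbers: 12.5 years -- and that ignores hardware
# refresh cycles, which can push the real payback out toward 'decades'.
```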

After the concrete pour, the HVAC would roll in. Power densities in the design equation would be mind-boggling: massive transformers, perhaps two grid connections, lots of halon, and load-bearing floors planned for enormous weight. In fact, these designs were brittle, and no one knew what might happen in the future. Design to the pragmas of 1995 and, 20 years later, everything is different.

We no longer use CSU/DSUs. It's fiber. A rack can draw upwards of 20 kW at a 0.95 power factor. Why? That honking blade server can burn lots of joules because it has hundreds of hot little cores. But there's no disk.
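
For the electrically inclined, here's what that 20 kW / 0.95 PF figure implies, assuming a common 208 V three-phase feed (the voltage is my assumption for illustration, not something from the WSJ report).

```python
import math

# What a 20 kW rack at 0.95 power factor implies, assuming a 208 V
# three-phase feed (the voltage is an assumption for illustration).
real_power_w = 20_000      # real power drawn by the rack (W)
power_factor = 0.95
line_voltage = 208         # three-phase line-to-line voltage (V), assumed

apparent_power_va = real_power_w / power_factor                 # S = P / PF
line_current_a = apparent_power_va / (math.sqrt(3) * line_voltage)

print(f"Apparent power: {apparent_power_va / 1000:.1f} kVA")    # ~21.1 kVA
print(f"Line current:   {line_current_a:.0f} A per rack")       # ~58 A
# Multiply by rows of racks and the transformer and floor-loading
# numbers from the 1995 design no longer pencil out.
```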

The disk moved to SAN arrays nearby. Virtualization has disassociated compute from storage from networking from archiving from control planes. Everything has a unified control plane talking to another unified control plane.

Therein lies another expense: multi-disciplined opex personnel, the systems engineers who are no longer simply network engineers, and so forth. They cost money, and their expertise needs almost constant updating. The skills that were top-gun in 2005 are left wanting in today's data center. Depending on many factors, $75K-$150K per annum (then multiply by 1.5 for benefits, training, and other non-direct wage expenses) is a minimum. Staff a 24/7 team across three shifts, and the data center opex goes crazy.
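
A quick sketch of that staffing math, reading the 1.5 above as a fully loaded cost multiplier; the head-count per shift is an assumed minimum for illustration.

```python
# Rough opex sketch for a 24/7 data center team, using the salary range
# and 1.5x loading from the article. Head-count per shift is assumed.

salary_low, salary_high = 75_000, 150_000  # base wage range ($/yr)
loading = 1.5            # benefits, training, other non-direct wage expenses
shifts = 3               # round-the-clock coverage
engineers_per_shift = 2  # assumed minimum; real teams also need weekend relief

def annual_staff_opex(base_salary: float) -> float:
    """Fully loaded annual cost for the whole 24/7 team."""
    return base_salary * loading * shifts * engineers_per_shift

print(f"Low end:  ${annual_staff_opex(salary_low):,.0f} per year")   # $675,000
print(f"High end: ${annual_staff_opex(salary_high):,.0f} per year")  # $1,350,000
# And that's staffing alone -- before power, bandwidth, and maintenance.
```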

It's no wonder, therefore, that pooling infrastructure costs across many tenants, the heart of cloud costing models, makes sense. Here's where the different flavors of cloud, be they Amazon, Azure, Rackspace, Helion, or another dozen varieties, start to make capex and opex sense.

Do what makes sense for your organization. Running your own data center has merit, but eventually you re-invent the wheel. There is no longer a fortress of data that you own and can point your finger at.

We've gone and virtualized the profit-making entity. Sometimes it's better to bank your assets elsewhere, instead of under the building.
