Data-center workloads are on the move, but not only to the cloud. Increasingly, they are shifting to colocation facilities as an alternative to privately owned data centers.

What is colocation?

A colocation facility, or colo, is a data center in which a business can rent space for servers and other computing hardware that the business purchases but that the colo provider manages.

The colo company provides the building, cooling, power, bandwidth, and physical security. Space is leased by the rack, cabinet, cage, or room. Many colos started out as managed-services providers and continue to offer those specialized services.

Some prominent providers include Equinix, Digital Realty Trust, CenturyLink, and NTT Communications, and there are several Chinese providers that serve only the China market. Unlike the data centers of cloud vendors like Amazon and Microsoft, colo facilities are generally located in large metropolitan areas.

“Colos have been around a long time, but their initial use case was Web servers,” said Rick Villars, vice president of data centers and cloud research at IDC. “What’s changed now is the ratio of what’s customer-facing is much greater than in 2000, [with the] expansion of companies needing to have more assets that are network-facing.”

Advantages of colos: Cost, cloud interconnect

Homegrown data centers are often sized incorrectly, with either too much capacity or too little, said Jim Poole, vice president of business development at Equinix. “Customers come to us all the time and say, ‘Would you buy my data center? Because I only use 25 percent of it,’” he said.

Poole said the average capital expenditure for a stand-alone enterprise data center that is not part of the corporate campus is $9 million. Companies are increasingly realizing that it makes sense to buy the racks of hardware but place them in someone else’s secure facility that handles the power and cooling.
“It’s the same argument for doing cloud computing, but at the physical-infrastructure level,” he said.

Mike Satter, vice president at OceanTech, a data-center-decommissioning service provider, says enterprises should absolutely outsource data-center construction or go the colo route. Just as there are contractors who specialize in building houses, there are experts who specialize in data-center design, he said.

He added that many data-center closures are followed by consolidation. “For every decommissioning we do, that same company is adding to another environment somewhere else. With the new hardware out there now, the servers can do the same work in 20 racks as they did in 80 racks five years ago. That means a reduced footprint and energy cost,” he said.

Often these closures mean moving to a colo. OceanTech recently decommissioned a private data center for a major media outlet Satter declined to identify, shutting down a New Jersey facility that held 70 racks of gear. The firm had planned to move its apps to the cloud but ended up expanding into a colo facility in New York City.

Cloud is not a money saver

Satter said he’s had conversations with companies that planned to go to the cloud but changed their minds when they saw what it would cost to later move workloads out. Cloud providers can “kill you with guidelines and costs” because your data is in their infrastructure, and they can set fees that make it expensive to move it to another provider, he said. “The cloud is not a money saver.”

That can drive decisions to keep data in-house, or in a colo, in order to keep tighter possession of it. “Early on, when people weren’t hip to the game for how much it cost to move to the cloud, you had decision makers with influence say the cloud sounded good. Now they are realizing it costs a lot more dollars to do that vs.
doing something on-prem, on your own,” said Satter.

Guy Churchward, CEO of Datera, a developer of software-defined storage platforms for enterprises, has noticed a new trend among CIOs: making the cloud-vs.-private decision for an app based on the app’s lifespan.

“Organizations don’t know how much resource they need to throw at a task. The cloud makes more sense for [short-term apps],” he said. For applications that will be used for five years or more, it makes more sense to place them in company-controlled facilities, he said. That’s because with three-to-five-year hardware-refresh cycles, the hardware lasts the entire lifespan of the app, and the hardware and the app can be retired at the same time.

Machine learning is also driving the choice of a private data center over the cloud. Churchward said that’s because machine learning is often done on large amounts of highly sensitive data, so customers want the data kept securely in-house. They also want a low-latency loop between their ML apps and the data lake from which they draw.

Link to multiple cloud providers

Another allure of colocation providers is that they can act as a pipeline between enterprises and multiple cloud providers. Rather than connecting directly to AWS, Azure, and the rest, businesses can connect to a colo, and the colo acts like a giant switch, connecting them to cloud providers through dedicated, high-speed networks.

Villars notes that the typical corporate data center is either inside corporate headquarters or someplace remote, like South Dakota, where land is cheap. The trade-off is that network connectivity to remote locations is often slower and more expensive.

That’s where colocation providers with large footprints come in, since they have points of presence in major cities. No one would fault a New York City-based firm for putting its data center in upstate New York or even farther away.
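Churchward’s lifespan rule of thumb can be sketched as a toy decision helper. This is purely illustrative; the function name and the five-year default are my assumptions, not anything from Datera:

```python
def placement_hint(app_lifespan_years: float,
                   refresh_cycle_years: float = 5.0) -> str:
    """Toy heuristic from the article: short-lived apps favor the cloud;
    apps expected to outlast a hardware-refresh cycle favor
    company-controlled facilities, so hardware and app retire together."""
    if app_lifespan_years < refresh_cycle_years:
        return "cloud"
    return "on-prem/colo"

print(placement_hint(1))   # a short-term app
print(placement_hint(7))   # an app that outlives a refresh cycle
```

A one-year app lands in the cloud; a seven-year app lands on-prem or in a colo, matching the five-years-or-more threshold Churchward describes.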
But when Equinix, Digital Realty Trust, and others all have data centers right in New York City, customers can get faster, and sometimes cheaper, connections, plus lower latency.

Steve Cretney, vice president and chief information and technology officer of SC Data Center Inc., an affiliate of Colony Brands Inc., is in the midst of migrating the company to the cloud, moving everything he can from his data center to AWS. Rather than connect directly to AWS, Colony’s Wisconsin headquarters is connected to an Equinix data center in Chicago.

Going with Equinix provides more, and cheaper, bandwidth to the cloud than buying direct connectivity on his own. “I effectively moved my data center into Chicago. Now I can compete with a better price on data communication and networks,” he said.

Cretney estimates that by moving Colony’s networking from a smaller local provider to Chicago, the company is seeing an annual cost savings of 50 percent for network connectivity, including telecommunications.

Colony also wants to adopt a multi-cloud strategy to avoid vendor lock-in, and it gets that by using Equinix as its network connection. As the company eventually adds Microsoft Azure, Google Cloud, and other providers, Equinix can provide flexible and economical interconnections, he said.

Cut the enterprise data-center footprint

In 2014, 80 percent of data centers were owned by enterprises, while colos and the early cloud accounted for 20 percent, said Villars. Today it’s a 50-50 split, and by 2022-2023, IDC projects, service providers will own 70 percent of the large-data-center space.

For the past five years, new data-center construction by enterprises has been falling steadily at 5 to 10 percent per year, said Villars.
“They are not building new ones because they are coming to the realization that expertise in data-center construction is not something [most companies have].”

Enterprises across many sectors are examining their data-center environments and leveraging technologies such as virtual machines and SSDs, compressing their data centers and getting more work done in smaller physical footprints. “So at some point they ask if they are spending appropriately for this space. That’s when they look at colo,” said Villars.
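To put the construction decline in perspective, the annual 5-to-10-percent drop compounds over the five-year span Villars cites. The arithmetic below is my illustration, not an IDC figure:

```python
# Illustrative compounding of a 5-10 percent annual decline in
# enterprise data-center construction over five years.
for annual_decline in (0.05, 0.10):
    remaining = (1 - annual_decline) ** 5  # fraction of the starting pace left
    print(f"{annual_decline:.0%} per year -> {remaining:.0%} "
          f"of the original construction pace after 5 years")
```

At the low end, five years of 5 percent declines leaves about 77 percent of the original construction pace; at 10 percent per year, only about 59 percent remains.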