Feb 2, 2017 4:00 AM PT

Cost optimization gains ground in IT infrastructure decisions

Companies need to continually seek ways to optimize costs. Now more than ever, they need to be nimble, efficient and smart.


In business, as in life, a great deal of time is spent predicting the future, especially at the dawn of a new year. Market watchers are scrambling to identify the top IT trends that will shape buying patterns in 2017. 

Amid all the data gathering and crystal-ball gazing, I prefer to look back and learn from what’s happened in the past so I’ll be better prepared to handle what lies ahead. 

Last month, I attended the Gartner Data Center, Infrastructure & Operations Management Conference, which never fails to deliver an insider’s look at the latest priorities, challenges and transformations in the corporate data center. This past conference provided lots of valuable insight, especially when it came to the topic of cost optimization. 


Cost optimization was a hot topic last year, and I'd hazard a guess it will still be a front-and-center concern when I speak with current and potential customers in the months ahead. Gartner devoted an entire track at the conference to the subject, and I shared my thoughts on how companies can stay on budget while still innovating during a session attended by nearly 300 IT professionals and decision makers. 

My strategy resonated with the packed room: Lengthen the lifecycle of existing hardware while continuing to purchase best-of-breed equipment and maintaining current and previous generation gear. After all, who isn’t taking a hard look at IT spending to see where to defer or lower capital expenditures? 

Everyone is trying to find ways to shift large spends from an event that takes place annually or every few years to a more predictable event that you plan for every five, seven or 10 years. Perhaps the best way for me to illustrate this point is to go back more than a decade to when the U.S. economy boomed just before the big housing market bust. 

Look to the past for insight into the future

Looking back to 2006, I can offer insight into the technological and socioeconomic changes that impacted two different strategies for building and supporting a corporate network. In this time capsule, let’s say your network is at a transportation company with 5,000 employees, $1 billion in revenue and an aggressive expansion plan to keep pace with rapid growth. 

Ten years ago, 64-bit computing took the world by storm as software for the new CPUs became mainstream. This represented a quantum leap in performance, so stepping up to 64-bit CPUs was a big priority. Voice over IP also gained ground, while prices for 10 gigabit networking came down. Against this backdrop, OEMs likely encouraged you to make across-the-board infrastructure upgrades even if they delivered only modest incremental improvements. 

If you took the immediate upgrade path, you would have made expensive and unnecessary network and server upgrades during a time when your budget should have been focused on VoIP and 10Gb. And, if you went with all-around upgrades, the price tag likely would have been more than $2.3 million in capex. 

An alternative approach would have been to keep legacy, yet fully functional, network switches and existing gear that was VoIP-capable while investing in server and some 10Gb upgrades to support the most critical systems. In that scenario, the spend would have been less than $1 million—about 60 percent less. 

Wise purchase and upgrade decisions became even more crucial from 2008 to 2011, a period marked by a major recession and the housing market crash. That same timeframe also saw the arrival of solid-state storage and the replacement of T1 networking with Ethernet. 

Like before, you could have upgraded everything or taken a much more selective and cost-effective path. The most bang for your buck would have been achieved by upgrading mission-critical, core systems first while deferring upgrades to less-critical systems. This approach potentially would have saved more than $1.8 million, which could have been a lifesaver in the middle of a major economic downturn. 


By 2011, the differences between the OEM path and the alternate path had become more obvious. While spending on game-changers such as wireless and the cloud was a no-brainer, across-the-board upgrades to 100Gb were overkill. Stepping up to 40Gb was sufficient for most bandwidth-intensive organizations. 

Over the past five years, the constant need for cost optimization has fueled growing interest in and demand for independent support providers, especially for post-warranty and end-of-service-life support for data center and network devices. Earlier this year, Gartner’s competitive market landscape shed more light on how third-party maintenance providers deliver an average of 60 percent savings off OEM support list prices. 

I’m not sure what to expect in the uncertain times ahead. But I am certain of one thing: Companies need to continually seek ways to optimize costs. It’s not just about the money. Now more than ever, you need to be nimble, efficient and smart. So, economize where you can, look to the past for answers, and apply those lessons to achieve a better future.