Securing servers

Opinion
May 03, 2004 | 4 mins
Computers and Peripherals, Data Center, Security

Editor’s Note: This is the first of our new Server Sleuth columns. Each week, the Sleuths will help crack your tough server-management questions – which you can send to sleuths@nwfusion.com. See the Server Sleuths bio page for more on our crack detectives.

Q. Securing our servers is our IT organization’s biggest concern. With new regulations concerning the protection of data and new viruses popping up almost daily, what can I do to ensure that our servers are up-to-date with the latest patches and service packs? 

Also, can you address the speed vs. security question? Once I get a patch, I have to check it for conflicts to ensure it does not bring down the network. If there are any, I have to make changes to the patch manually, which drastically increases the time it takes to deploy that individual patch – sometimes by as much as a month. Is there anything I can do to speed up the process while keeping my network as safe as possible?

I’ve worked with teams who told me that before they solved it, patch management was a “can’t see the forest for the trees” problem.  They spent time and energy tackling each patch or update as a single event, working on a single layer of the configuration, like a single tree.  The problem is that sometimes their solutions affected other aspects of their networking “forest.” One tree leans a bit, and others can fall.  Without accounting for the entire environment, patch management can result in instability as patches and updates impact other elements and create disruptions.

The whole software stack must work together, and continue to do so even after reprovisioning or changes. Patching one aspect of one layer at a time is slow and risky, and frankly it isn’t working in the real world. I believe automation is key to risk avoidance as well as speed.

The fundamental idea is that you need to manage patches holistically. In other words, you need to look at the full configuration, the entire stack – operating system, applications, content, settings, etc. – to do patch management effectively (getting the right patch to the right system), efficiently (quickly), and risk-free (no disruptions because of unforeseen conflicts when patches and updates are applied across the whole environment).

A comprehensive configuration database – one that has all of the configuration information, across the entire software stack – can provide visibility. This enables a thorough analysis of the potential impact of any change in advance of the change. By identifying problems before they occur, you can focus your time just on the patches and areas that need adjustment up front. You’ll avoid the domino effect of problems across your environment that disrupt service and take enormous IT time and resources to solve.
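
To make that concrete, here is a minimal sketch of what such an advance impact analysis might look like, with the configuration database reduced to a simple dependency graph. All component names and the CONFIG_DB structure are hypothetical illustrations for this column, not any particular product’s schema.

```python
# Minimal sketch: pre-change impact analysis over a configuration database.
# CONFIG_DB and all component names are hypothetical illustrations.
from collections import deque

# Each component maps to the components that depend on it
# (OS layer -> applications -> customer-facing services, etc.).
CONFIG_DB = {
    "openssl-0.9.7": ["apache-2.0", "mail-gateway"],
    "apache-2.0": ["intranet-portal", "customer-extranet"],
    "mail-gateway": [],
    "intranet-portal": [],
    "customer-extranet": [],
}

def impact_of(patched_component: str) -> set[str]:
    """Walk the dependency graph to find everything a patch could disturb."""
    affected, queue = set(), deque([patched_component])
    while queue:
        current = queue.popleft()
        for dependent in CONFIG_DB.get(current, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Before applying the OpenSSL patch, see what else it can touch.
print(sorted(impact_of("openssl-0.9.7")))
# -> ['apache-2.0', 'customer-extranet', 'intranet-portal', 'mail-gateway']
```

Knowing in advance that a low-level patch reaches the customer extranet is exactly the kind of finding that lets you test the one configuration that matters instead of everything.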

The potential impact of any change is not just technical, of course. It is also vital to look at the impact of any change on your business logic. A policy-based patch management solution can help manage risk in this way. For example, if you know that customer-facing systems must remain stable, you can establish policies that analyze changes and introduce them on the corporate intranet before the extranet.
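
As an illustration of that kind of policy, the sketch below gates a patch by tier so that customer-facing extranet systems receive it only after every intranet system has. The System type, the tier names, and the ROLLOUT_ORDER policy are all hypothetical.

```python
# Hypothetical sketch of a policy-based rollout: customer-facing systems
# only receive a patch after it has soaked on internal systems.
from dataclasses import dataclass

@dataclass
class System:
    name: str
    tier: str  # "intranet" or "extranet" (customer-facing)

# Policy: the order in which tiers may receive a change, earliest first.
ROLLOUT_ORDER = ["intranet", "extranet"]

def next_wave(systems: list[System], patched: set[str]) -> list[System]:
    """Return the systems eligible for the patch in the next rollout wave."""
    for tier in ROLLOUT_ORDER:
        pending = [s for s in systems if s.tier == tier and s.name not in patched]
        if pending:  # finish the earlier tier before moving on
            return pending
    return []

fleet = [System("portal-1", "intranet"), System("shop-1", "extranet")]
print([s.name for s in next_wave(fleet, patched=set())])    # -> ['portal-1']
print([s.name for s in next_wave(fleet, {"portal-1"})])     # -> ['shop-1']
```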

There’s another best practice to ensure stability.  No matter what we do, the fact is that “things happen” in IT environments and across user populations.  People change settings on machines and bring new applications online, and infections and disruptions can result.   Automating ongoing verification of correct software configurations across the environment, based on established policy and models, can ensure that patches and updates stay applied as intended.
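
A rough sketch of that verification loop, assuming a desired-state model: compare each machine’s reported state against the model and report any drift so the intended configuration can be reapplied. DESIRED_MODEL and the version strings here are invented for illustration.

```python
# Hypothetical sketch of ongoing verification: compare a machine's actual
# configuration to the desired model and report drift for remediation.
DESIRED_MODEL = {"openssl": "0.9.7d", "apache": "2.0.49"}

def find_drift(actual: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Map each drifted package to an (expected, found) version pair."""
    drift = {}
    for package, expected in DESIRED_MODEL.items():
        found = actual.get(package, "<missing>")
        if found != expected:
            drift[package] = (expected, found)
    return drift

# A user downgraded OpenSSL; the next verification sweep catches it.
print(find_drift({"openssl": "0.9.7c", "apache": "2.0.49"}))
# -> {'openssl': ('0.9.7d', '0.9.7c')}
```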

Fitzgerald is CTO and Director of Product Development of HP Change & Configuration Software Operations (formerly Novadigm, Inc.). Along with Albion Fitzgerald, he co-authored the patent underlying its adaptive management technology. Before joining Novadigm, he served as director of product development at Pansophic Systems and its predecessor company ASI/Telemetrix, where he engineered large-scale, high-performance networking software used in mission-critical applications by Fortune 500 companies worldwide. He has a solid background in systems management, specifically in the areas of security, networking and operating systems.
