Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
If you’ve ever built something yourself rather than buying it, like a bookshelf or a birdhouse, you know the satisfaction of making something exactly the way you want it. But when something you’ve made breaks down, you have to fix it yourself. And while repairing a bookshelf is one thing, recovering applications in a data center when they fail is something else entirely.
Linux is an excellent tool for creating the IT environment you want in the data center. Its flexibility and open-source architecture mean you can use it to support nearly any need while keeping costs low. But if something does go wrong, it’s up to you to ensure your business operations can continue without disruption. And while many disaster recovery solutions focus on recovering data in case of an outage, that data will be useless if the applications that use it don’t function and you are unable to meet service-level agreements (SLAs).
Businesses that value the independence Linux provides can benefit from partnering with a technology provider that can keep their business running in the event of disaster. And as we have seen all too frequently in the last several years, outages strike organizations of all sizes, whether caused by natural disasters or by large-scale hacks that take down servers company-wide. It seems that every week we hear in the news about another large company experiencing a significant service failure.
As you consider what to look for in solutions to keep your Linux-heavy data center up and running, consider the following criteria:
• Speed of failure detection and recovery: Every minute counts when it comes to business downtime. The first step to effective recovery is rapid detection of failure. Even the best recovery solution will be insufficient if the detection process itself takes minutes rather than seconds. The ideal tool should provide fast detection with minimal resource usage, to meet recovery time objectives.
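To make the detection criterion concrete, here is a minimal sketch of heartbeat-based failure detection. The `HeartbeatMonitor` class and the node names are hypothetical, not from any particular product; real monitoring tools use the same idea, declaring a node failed once its heartbeats stop arriving within a configured timeout.

```python
import time

# Hypothetical heartbeat monitor: a node is considered failed once its
# last heartbeat is older than the timeout. A short timeout keeps
# detection latency in seconds rather than minutes.
class HeartbeatMonitor:
    def __init__(self, timeout=2.0):
        self.timeout = timeout   # seconds of silence before declaring failure
        self.last_seen = {}      # node name -> timestamp of last heartbeat

    def heartbeat(self, node, now=None):
        # Record a heartbeat; time.monotonic() is immune to clock changes.
        self.last_seen[node] = time.monotonic() if now is None else now

    def failed_nodes(self, now=None):
        # Return every node whose last heartbeat exceeded the timeout.
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

# Usage: 'db1' goes silent while 'web1' keeps reporting.
mon = HeartbeatMonitor(timeout=2.0)
mon.heartbeat("web1", now=100.0)
mon.heartbeat("db1", now=100.0)
mon.heartbeat("web1", now=103.0)
print(mon.failed_nodes(now=103.5))  # only db1 exceeded the 2-second timeout
```

The trade-off to watch is the one the criterion names: a shorter timeout detects failure faster but costs more heartbeat traffic and risks false positives on a loaded network.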
• Failover that covers the entire range of business services: Business-critical applications may require preserving several layers of the information stack that perform complementary processes, such as the Web front end, the application itself and the databases feeding it information. Maintaining high availability is harder when recovery spans multiple interdependent tiers. Be sure your backup and recovery solutions can handle the interconnected processes necessary to maintain business operations.
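The key difficulty with multi-tier services is that the layers must come back in dependency order: the database before the application, the application before the Web tier. A minimal sketch of that ordering, with an illustrative (not product-specific) dependency map:

```python
# Hypothetical dependency map: each service lists what it needs running
# before it can be recovered. Names are illustrative.
DEPENDS_ON = {
    "web": ["app"],   # the Web tier needs the application
    "app": ["db"],    # the application needs its database
    "db":  [],        # the database has no dependencies
}

def recovery_order(services, deps=DEPENDS_ON):
    """Return the services in an order that recovers dependencies first."""
    order, seen = [], set()

    def visit(s):
        if s in seen:
            return
        seen.add(s)
        for d in deps.get(s, []):
            visit(d)          # recover what this service depends on first
        order.append(s)

    for s in services:
        visit(s)
    return order

print(recovery_order(["web", "app", "db"]))  # ['db', 'app', 'web']
```

A recovery tool that restarts tiers in the wrong order leaves the service down even though every individual server is technically up, which is why the criterion asks for failover that understands the whole stack.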
• Advanced failover: Keeping a standby server ready for every server you use can be costly. Look for more advanced failover capabilities that allow you to maintain redundancy with one server that can take over for any that fail.
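The cost argument here is often called N-to-1 failover: one standby covers many primaries instead of one standby per primary. A minimal sketch of the idea, with hypothetical server names:

```python
# Hypothetical N-to-1 failover pool: one standby server can take over for
# whichever primary fails, rather than pairing every primary with its own
# dedicated standby.
class FailoverPool:
    def __init__(self, primaries, standby):
        self.primaries = set(primaries)
        self.standby = standby
        self.covering = None  # which failed primary the standby is serving

    def fail(self, node):
        # Assign the standby to the first primary that fails; a single
        # standby cannot absorb a second concurrent failure.
        if node in self.primaries and self.covering is None:
            self.covering = node
            return f"{self.standby} takes over for {node}"
        return "no standby available"

pool = FailoverPool(["app1", "app2", "app3"], standby="spare1")
print(pool.fail("app2"))  # spare1 takes over for app2
print(pool.fail("app3"))  # no standby available
```

As the second call shows, the savings come with a limit: a single standby protects against one failure at a time, so the pool size should reflect how many simultaneous failures you need to survive.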
• Failover testing capability: You shouldn’t wait until you have a disaster to learn how well your recovery solutions work. Look for tools that include the ability to test failover, to assess performance without affecting normal operations.
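One common way tools provide non-disruptive testing is a dry-run mode: the failover decision logic runs end to end, but no workload actually moves. A minimal sketch of that pattern, with hypothetical names:

```python
# Hypothetical failover drill: dry_run=True exercises the failover path
# and reports what would happen, without touching production workloads.
def failover(node, target, dry_run=False):
    plan = f"move workloads from {node} to {target}"
    if dry_run:
        return f"TEST ONLY: would {plan}"
    return f"executing: {plan}"

# A scheduled drill checks the plan without any production impact.
print(failover("app1", "spare1", dry_run=True))
```

Regular drills of this kind turn the question "would our failover work?" from a guess into a measured answer, which is exactly what this criterion asks a tool to support.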