Best practices for the new IT

Early adopters give tips on how to pick - and cost-justify - New Data Center technologies.

Page 3 of 5

When you put a new application release into production, and it brings your servers down despite prerollout testing, you know your best practices are begging for an overhaul. Such was the case at competitive-game provider WorldWinner, in Newton, Mass. Joe Bai, CIO and vice president of technology, describes the problems that prompted him to begin rethinking IT best practices and investigating next-generation change-management tools.


Four practitioners offer tips

"I was here less than three weeks when we put a release out, and it didn't work. It wasn't that the new functionality wasn't appropriate or wasn't performing the way we expected. The Web servers didn't come back," Bai says.

It turned out the version of Apache running in the development and quality-assurance environments wasn't the same as the one for the production environment. "The new code base that went out was dependent on code and configuration parameters that weren't there."
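A mismatch like that can be caught by diffing key configuration parameters across environments before a release goes out. The sketch below is purely illustrative: the parameter names and values are hypothetical, and this is not mValent's product or WorldWinner's actual tooling, just the general drift-check idea.

```python
# Hypothetical sketch: flag configuration parameters whose values differ
# (or are missing) across dev, QA and production environments.

def config_drift(environments):
    """Return {parameter: {env: value}} for parameters whose values differ."""
    all_keys = set().union(*(cfg.keys() for cfg in environments.values()))
    drift = {}
    for key in sorted(all_keys):
        values = {env: cfg.get(key) for env, cfg in environments.items()}
        if len(set(values.values())) > 1:  # mismatch, or parameter absent somewhere
            drift[key] = values
    return drift

# Illustrative data modeled on the incident described above.
envs = {
    "dev":  {"apache_version": "2.0.52", "max_clients": 150},
    "qa":   {"apache_version": "2.0.52", "max_clients": 150},
    "prod": {"apache_version": "1.3.29", "max_clients": 150},
}
print(config_drift(envs))
```

Run before each release, even a simple check like this turns "the Web servers didn't come back" into a pre-rollout warning.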

The team had to roll back the release and find the discrepancies. "It probably cost us a quarter of a day's revenue," Bai recalls of the 2003 event.

The bigger problem was that such issues weren't unusual for WorldWinner at the time. "We had a number of releases that went out and required eight, 10 or 12 patches before we were happy enough with them to leave them up. That's just not the way we wanted to do things," Bai says.

Over the last two years, Bai transformed the IT department from fire-fighting architects, engineers and developers into a lean, agile group that keeps the site up and stocked with fresh features, and anticipates application enhancements before marketing staff comes asking for them.

No single project fixed WorldWinner's problems. Rather, Bai launched multiple best-practice efforts aimed at implementing better change management, stronger version control and other improvements related to software releases.

One tool in Bai's arsenal is mValent's Integrity suite, which automates application-configuration management. The mValent technology helps developers recognize configuration-related inconsistencies and automatically makes changes to the underlying application infrastructure. "A lot of things had to come together, but they're all based on really knowing the environments and getting good, instrumented and measured software out the first time," Bai says.

He advises others who want to shore up application processes used in New Data Center (NDC) architectures to think long term. "Don't try to solve an entire problem at once. Look at it on an ongoing basis, and don't assume you ever have it solved," Bai says.

That's advice to remember as you evaluate the latest technologies aimed at gleaning greater efficiencies from existing IT resources. Virtualization can bolster server and storage-use rates and reduce administration, vendors say. If consolidation is the objective, blade servers offer space-saving, power-conserving features. A service-oriented architecture (SOA) promises easily combined, modular software components, while application- and systems-management experts propose tools to streamline and automate manual tasks that bog down corporate processes.

Early adopters who have deployed such NDC technologies learned lessons about what works and what doesn't. Their tips often suggest new ways of doing IT.

Choosing wisely

Before making a commitment, weigh the long-term viability of any new technology, says Cliff Dutton, who is the former CTO at Ibis Consulting. (Dutton recently joined Dynamic Communication, a management consultancy.) People tend to focus on the size of a vendor when considering an IT purchase, but size isn't the only determinant of a product's long-term success.

"There are new technology offerings from large vendors that have the same characteristics as new technologies from smaller vendors - they're not well deployed yet, they're not necessarily going to be supported in the long run," Dutton says. "A name-brand large supplier can terminate a product line just as easily as a small company can go out of business."

Ibis, which provides electronic data discovery services, deployed Acopia Networks' storage virtualization switches in the fall of 2004. These new-style devices attach to network-attached storage (NAS) appliances and virtualize the files residing on them (see related story).

When Dutton first talked about plans to virtualize his company's 200TB storage environment, people's reactions made him think he'd taken a crazy risk on a young technology.

But Dutton had clear expectations when he chose Acopia. The switches let Dutton create a single file system across multiple devices, so storage administrators at the Providence, R.I., company can reallocate shares and balance the workload across multiple NAS boxes without disrupting users' access to data.
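The core idea behind such file virtualization can be shown with a toy model: clients address files through one stable namespace, while the data behind each path can move between NAS boxes for rebalancing. This is a minimal sketch of the concept only, not Acopia's implementation, and all paths and device names are made up.

```python
# Toy model of a virtualized file namespace: client-visible paths stay
# constant while the backing NAS device and physical location can change.

class VirtualNamespace:
    def __init__(self):
        self.location = {}  # virtual path -> (nas_device, physical_path)

    def add(self, vpath, device, ppath):
        self.location[vpath] = (device, ppath)

    def migrate(self, vpath, new_device, new_ppath):
        # Rebalance storage: move the data, keep the client path unchanged.
        self.location[vpath] = (new_device, new_ppath)

    def resolve(self, vpath):
        return self.location[vpath]

ns = VirtualNamespace()
ns.add("/projects/case42/evidence.db", "nas1", "/vol0/e.db")
ns.migrate("/projects/case42/evidence.db", "nas2", "/vol7/e.db")
print(ns.resolve("/projects/case42/evidence.db"))  # clients keep using the same path
```

The indirection in the `location` map is what lets administrators shuffle workload across NAS boxes without disrupting users' access.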

If Ibis can process more data more efficiently using existing capacity and staff resources, then the company's bottom line grows. "Anything that improves our ability to administer the storage environment has impact on the business," Dutton says.

To reduce the risk of project failure, IT buyers and vendors need to be on the same page. "People need to be very clear about their expectations technically of what a new vendor in their shop is intended to do," Dutton says. "You need to write it down, and you need to get explicit commitment from the vendor to support the achievement of those requirements."

Users also need to understand that not all devices are created equal. Take blade servers, some of which are diskless and some of which aren't. Albridge Solutions chose the former option, from Egenera, to consolidate and virtualize its server environment.

Egenera's blade servers consist of only processors and memory, while other blade servers have internal hard drives and boot internally, says Rao Pallepati, vice president of IS and security at Albridge in Lawrenceville, N.J., which offers customer data-management software for financial institutions. "If you look at other blade servers, they're only saving space and power; they're not really doing much virtualization," Pallepati says.

When it comes to new technologies, "healthy skepticism is good," says Tony Plasil, principal and head of investment technology at STW Fixed Income Management in Carpinteria, Calif.

The specialty bond-management firm is an early adopter of Corticon Technologies' software to manage business rules. STW uses Corticon's rules engine to make sure investment transactions don't violate any account guidelines, such as a customer's limits on holdings in a certain industry. STW integrated the rules engine directly with its trading application so that violations can be detected in real time, before a trade is executed.
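The pre-trade check STW describes can be sketched in a few lines. To be clear, this is a hypothetical illustration of the pattern, not Corticon's rule format or API: rule and trade field names here are assumptions.

```python
# Illustrative pre-trade compliance check: block a trade if it would push
# an account's exposure to a sector past its guideline limit.

def check_trade(trade, holdings, rules):
    """Return a list of violations; an empty list means the trade may execute."""
    violations = []
    for rule in rules:
        if rule["account"] != trade["account"] or rule["sector"] != trade["sector"]:
            continue
        new_exposure = holdings.get(trade["sector"], 0.0) + trade["amount"]
        if new_exposure > rule["max_exposure"]:
            violations.append(
                f"{trade['sector']} exposure {new_exposure} exceeds limit {rule['max_exposure']}"
            )
    return violations

rules = [{"account": "A100", "sector": "utilities", "max_exposure": 1_000_000}]
holdings = {"utilities": 900_000}
trade = {"account": "A100", "sector": "utilities", "amount": 200_000}
print(check_trade(trade, holdings, rules))  # flags the limit breach before execution
```

The point of a rules engine is that limits like `max_exposure` live as data a business analyst can edit, not as code buried in the trading application.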

Rules engines are generating a lot of buzz, but enterprise IT executives need to be aware of their limitations, Plasil says. "Don't get fooled by the templates, the GUIs. If a vendor shows you how easy it is, be skeptical."

In particular, if a vendor starts referring to alternative methods of defining rules, then listen carefully. "When it starts talking about being able to drop down into some kind of code, be very watchful," Plasil says. "That means you're probably going to be writing a lot of your rules in code, and they aren't going to be supported by the application."

Plasil may have sacrificed some ease of use with Corticon's technology, but he's not limited in the rules he can define. That's just the way Plasil wants it, and he never intended to delegate rule-making tasks outside of IT anyway. "It's much better for our firm not to have any gaps and to have this controlled by a senior business analyst and not have a whole bunch of people able to put rules in."

Help from inside

Part of WorldWinner's application overhaul involved new technologies, such as mValent's Integrity, but personnel and process changes also have made a big impact, Bai says.

One of the lead engineers at WorldWinner recently started a lunch series where people talk about what they're working on - what they think is cool, what they need help with, and the impact of changes to third-party development tools.

It's turned out to be a great venue for swapping ideas and encouraging code reuse, Bai says. The meetings' informality is crucial.

"We're too small to be formal. I've found that these lunch sessions are infinitely more efficient than trying to convince a developer that he needs to document possible use cases for his code, or put something in some documentation store that everyone else is supposed to check," Bai says. "It just doesn't work. But if they chat up a new feature over lunch, it even works better than talking about it with a product manager."

When an NDC project is focused on optimizing business processes and operations, it's particularly important for IT staff, business analysts and users to set the project agenda together, says Robert Salazar, vice president of process management at First Horizon in Irving, Texas. First Horizon selected a business process management (BPM) suite from Fuego, and uses the tools and Fuego's methodology as part of a broad effort to automate, manage and optimize mortgage loan operations.

Viewing a BPM initiative as purely an IT project is short-sighted, Salazar says. By insisting on collaboration throughout the design and development phases, First Horizon had few surprises or forgotten requirements when it completed its first BPM project, he says. "I saw the line-of-business people taking ownership of project delivery, and when we did hit those couple of inevitable bumps along the way, they would resolve them. They were as much interested in the project being successful as we were."

Of course, no matter how prepared IT is, some surprises still crop up after a rollout.

Desert Schools Federal Credit Union in Phoenix uses server virtualization technology from EMC company VMware to cut hardware costs and speed server deployments. The IT department first tried out VMware's Workstation product internally to create a test environment for development projects.

Later, it deployed VMware's server products in the IT lab before extending the technology to the company's production application environment.

While IT staffers were familiar with how the technology works, they learned even more when the rollout advanced outside the lab, says Doug Baer, systems engineer at Desert Schools. Baer's advice to other companies considering a virtual server environment is to be mindful of what an application is doing. Not every application is a good candidate for a virtual machine. "SQL is notoriously difficult, because it hits the disk a lot, and disk virtualization is expensive," he says.

In addition, be prepared for resistance from some application vendors. Desert Schools looks first to run each new application on a virtual machine, but some projects can't be run on a virtual machine because of the vendor's support requirements, Baer says.

"The real trick is getting vendors to support their software running in a virtual machine," he says. "It's been more of a problem than I would have anticipated it being."

VMware has a process in place for dealing with reluctant independent software vendors, and that's been helpful, Baer says. Over time, as the technology becomes mainstream, Baer hopes the need for such intervention will disappear. "Being near the bleeding edge, that's kind of what you run into."

When you've identified applications that are a good fit for a virtual machine, make sure the infrastructure is ready, he adds. "Take the time to design the virtual infrastructure to be as redundant as possible," Baer says. "Go for servers that have lots of RAM, for one thing. Also, have redundant connection to the [storage-area network], redundant power supplies and redundant network connections."

New systems, new roles

No new technology operates as an island: For the most part, integration is unavoidable.

STW's Plasil recommends that companies considering deploying a rules engine dig into the details of how an engine can be linked to existing systems before making a purchase. Corticon's technology lets STW incorporate the rules engine into existing business applications as a Web service, for example.

Consider, too, how any new technology fits into the bigger management picture, Dutton says. For example, Dutton has worked to create an integrated performance-monitoring framework at Ibis, including software from Mercury Interactive that lets IT staff view a broad picture of data-center conditions and spot potential problems.

Also make sure that any new gear added to the NDC architecture is compatible with the existing framework, Dutton says. "If you leave islands of functionality that are not under the umbrella of performance-management monitoring, then you're going to have holes in your visibility."

If a company is building an SOA, tools for managing, securing and monitoring services are important, says Tyrone Page, senior software architect at JetBlue Airways in New York. "If you have many services supporting thousands or even millions of requests, you need to be able, at a glance, to see what is going on with those services," says Page, who uses SOA Software's Service Manager to secure and manage JetBlue's Web services architecture. "You need to be able to see if the service is up, how many unauthorized requests are coming in and where they are coming from, and you want to be able to throttle and redirect traffic based on [service-level agreements]."
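The at-a-glance view Page describes amounts to rolling per-request logs up into per-service metrics. The following is a hypothetical sketch of that aggregation, not SOA Software's product; the service name, log fields and SLA threshold are all invented for illustration.

```python
# Illustrative roll-up of a raw request log into per-service health metrics:
# request counts, unauthorized calls and their sources, and an SLA-based
# throttle flag.

from collections import defaultdict

def service_dashboard(request_log, sla_max_requests):
    """Summarize requests, unauthorized calls and throttle status per service."""
    stats = defaultdict(lambda: {"requests": 0, "unauthorized": 0, "sources": set()})
    for entry in request_log:
        s = stats[entry["service"]]
        s["requests"] += 1
        if entry["status"] == 401:  # unauthorized request
            s["unauthorized"] += 1
            s["sources"].add(entry["source_ip"])
    for name, s in stats.items():
        # Flag services exceeding their agreed request budget for throttling.
        s["throttle"] = s["requests"] > sla_max_requests.get(name, float("inf"))
    return dict(stats)

log = [
    {"service": "booking", "status": 200, "source_ip": "10.0.0.5"},
    {"service": "booking", "status": 401, "source_ip": "172.16.0.9"},
    {"service": "booking", "status": 200, "source_ip": "10.0.0.6"},
]
print(service_dashboard(log, {"booking": 2}))
```

Tracking unauthorized sources alongside volume is what lets operators answer both of Page's questions, "is the service up?" and "where are the bad requests coming from?", from one view.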

The need for governance, in particular, shouldn't be underestimated. The hardest part about moving to an SOA is governance, Page says. "When services are built and consumed at the enterprise level, some of the issues which need to be addressed are: Who owns the service? Who can have access to the service? Who is responsible for maintaining the service? Who pays to maintain the service?"

As a company shifts to an SOA model, job roles also may need to change, Page adds. "Developers will need to begin to think differently about how things are built. Right now many think a service is just taking an old application, placing a service facade on it, and calling it a service. A service needs to be thought through end to end," including proper security and version control, he says.

"Some roles will change, others will just take on responsibility as more and more services come online. For example, service security and governance could become full-time positions," Page says.

Likewise, using BPM tools and methodologies requires a different, broader way of thinking than some developers and business systems analysts are accustomed to, Salazar says.

"You have developers who tend to want to be very heads-down, focused on snippets of code. And you have business systems analysts whose analysis is always within the context of the constraints of the system that they built," Salazar says. "In order to do this work, you have to break out of that."

One way to help along the training process is to use the expertise of the vendor. During First Horizon's first two BPM projects, an architect and a developer from Fuego worked with internal staff to ensure that design and process decisions aligned with the best-practices methodology Fuego espouses.

Having access to them made the knowledge-transfer process much more effective, Salazar says. "I wouldn't expect as an organization for us to know what to do and not to do the very first time we tried."

Next: Security expert Rhonda MacLean states her case


