Sam Marrazzo, senior application architect for Praxair, shares his insights on preparing applications for virtualized servers.

As senior application architect for Praxair, a $5.6 billion industrial products supplier in Danbury, Conn., Sam Marrazzo has spent the past year evaluating applications – old and new – for use on virtualized servers. The differences between developing applications on virtual instances rather than physical servers can be astounding, Marrazzo says. For example, he has found that application build times can be up to 100 times faster with virtualization. “Crazy, but true,” he says. Here Marrazzo tells Signature Series Editor Beth Schultz what he’s learned as an application architect working within a virtualized environment, from how to test an application to how to plan for disaster recovery.

Under what circumstances should an application not be put on a virtualized server?

One of the first baselines we establish is how CPU-intensive an application is. If an application has a sustained CPU requirement for a long period of time, then it’s a physical-server candidate. So if we see 80% CPU utilization for an hour, that would tell us that VMware [server virtualization software from EMC business unit VMware] as a whole wouldn’t be good for that application, that the application is going to require a dedicated CPU. [While Marrazzo evaluates applications for use on virtualized servers, the decision to move to a virtualized environment was made by Praxair’s infrastructure group.]

When we think about virtualization, we talk about ‘blip’ applications – applications that see the CPU go up to 100%, then come back down to 20% to 60% at full utilization. Those are good for a centralized computing environment where we can manage virtual instances.

Are you setting a baseline for every application, or only select ones?

Any application that comes in [for processing] today is tested. We monitor the CPU and then determine whether we need to move it off or not. Excellent candidates are applications for print servers and terminal servers. Also, new applications, like our job scheduler, are being brought into VMware.
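The sustained-utilization screen Marrazzo describes boils down to a simple check over sampled CPU data. The following sketch is purely illustrative and is not Praxair’s tooling: it assumes utilization has already been collected once a minute by whatever monitoring tool is in place, and the 80%-for-an-hour cutoff simply mirrors the rule of thumb above.

# Illustrative sketch: classify an application as a physical-server or
# virtualization candidate from sampled CPU utilization. The sample interval
# and thresholds are assumptions that echo the rule of thumb in the interview.
SAMPLE_INTERVAL_MIN = 1      # assume one CPU reading per minute
SUSTAINED_THRESHOLD = 80.0   # percent CPU considered "heavy"
SUSTAINED_WINDOW_MIN = 60    # how long heavy use must persist, in minutes

def longest_heavy_run(samples, threshold=SUSTAINED_THRESHOLD):
    """Return the longest consecutive run of samples at or above threshold."""
    longest = current = 0
    for pct in samples:
        current = current + 1 if pct >= threshold else 0
        longest = max(longest, current)
    return longest

def classify(samples):
    """Sustained heavy CPU suggests a dedicated box; spiky 'blip' profiles virtualize well."""
    run_minutes = longest_heavy_run(samples) * SAMPLE_INTERVAL_MIN
    if run_minutes >= SUSTAINED_WINDOW_MIN:
        return "physical server candidate"
    return "virtualization candidate"

# Example: a 'blip' profile that spikes to 100% but settles between 20% and 60%
blip_profile = [100, 95, 40, 55, 30, 60, 25, 100, 45, 35]
print(classify(blip_profile))   # -> virtualization candidate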
How does that job-scheduling application run in a virtualized environment, and how has it benefited Praxair?

We have been using the job scheduler [from Tidal Software] to run the whole ERP application [from J.D. Edwards]. We selected Tidal because we saw it could run in VMware, which meant we didn’t have to buy new hardware, and because it fits into our disaster-recovery process very nicely – you definitely need a disaster-recovery solution for enterprise scheduling. We physically split Tidal across two separate VMware instances, [each of those running on servers at disparate data centers]. That gives us disaster recovery and isolation.

In the past, we were decentralized from a scheduling perspective. We’d run a job on this system and one on that system, and we couldn’t have interdependencies within those jobs. All the jobs were scheduled in their own environment without visibility into the outside systems. With Tidal, we get streamlined job scheduling, manageability and a centralized, enterprise view of all the production applications.

You mentioned disaster recovery for applications. How does virtualization fit in?

We have two separate data centers in the Northeast. We split the resources between them – the servers and the instances. That’s how we isolate the applications. If an application needs disaster recovery, we load balance across two servers and separate it that way. This has saved us in physical capital costs. In the traditional sense of distributed computing, you have to buy two of everything. Now we just buy two VMware servers and virtualize the instances for the disaster-recovery plans.

Do you work with other virtualized IT resources?

We have shared storage that’s virtualized, so we have redundancy there. We attach the storage to VMware but use a separate [storage-area network]. We have direct connections to the storage, so all applications have access to that storage.

What other changes have come from the ability to run enterprise applications in a virtual environment?

Now there’s no need to buy a server for every project. We can manage these virtual instances via a console, and on the fly, if there’s a problem with a VMware session, we can allocate CPU or memory. Applying patches also is centralized now. At the same time, virtualization reduces our physical-device costs; in most cases within an enterprise, 60% to 70% of application servers are underutilized.

As far as staffing goes, requirements are basically the same. But in the past, we were building servers, and we couldn’t keep up with the demand. Now we’re building images, using only the amount of CPU or RAM required for that particular project. It’s not like we’re running around with hammers and screwdrivers and wrenches, but we’re still building big enterprise boxes for the VMware instances. It’s just that it’s one now vs. 30.

Before, when we needed to do an evaluation of an application, we needed to have a server built, and the first question was ‘Who’s going to pay for it?’ Today, we can evaluate applications very quickly, creating instances on our own, on the fly. That means we can deliver those applications much faster and make the business units much happier. We’re looking at 80 to 100 times faster application delivery without the server build time and ordering. I know it sounds crazy, but it’s true.

How do you anticipate virtualization unfolding further at Praxair and beyond?

At Praxair, we’ll be continuing down the consolidation path – you can’t consolidate everything in the first year. So we look at new apps and then legacy apps. Our goal is to further reduce the number of application servers in the data center.

In the industry, what I’d like to see is developers coming up with guidelines for running applications on VMware. We’d like to see the larger companies, like IBM and Citrix, get VMware certifications. What we hear from vendors universally is ‘We use VMware in our testing and development environment.’ We don’t hear them saying – and we want to – that they use it in their production environments across the board. We want to see more vendors adopt this technology because it will continue to be our direction going forward.

We’re an early adopter, but so far, so good. Deployment and execution have been great.
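Marrazzo’s point about allocating CPU or memory to a VMware instance on the fly can also be scripted rather than handled through the console. The sketch below is a minimal illustration using the open-source pyVmomi library against the vSphere API; the vCenter host, credentials, VM name and new sizing are placeholder assumptions, not details of Praxair’s environment.

# Illustrative sketch: grow a VM's CPU and memory through the vSphere API
# with pyVmomi. Host, credentials, VM name and sizing are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Walk the vCenter inventory and return the first VM with a matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next((vm for vm in view.view if vm.name == name), None)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()          # skip cert checks; lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    vm = find_vm(si.RetrieveContent(), "job-scheduler-01")
    if vm is not None:
        spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=8192)   # new sizing
        task = vm.ReconfigVM_Task(spec=spec)                  # applied asynchronously
        print("Reconfiguration task submitted:", task.info.key)
finally:
    Disconnect(si)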