The enterprise side of the data center includes a mainframe that supports two major systems: the state's main Medicaid system and the university's student information system, which includes financial aid and registration. "We're on the front end of a transition to a new Medicaid system based on MITA (the Medicaid Information Technology Architecture) and a student information system replacement project, so the mainframe will be gone in about five years," CIO Bottum says. The new systems will be based on redundant commodity hardware and virtual machines.
The rest of the enterprise infrastructure -- some 700 x86 boxes, mostly Dell and Sun with a little bit of IBM mixed in -- supports about 155 applications, everything from email and payroll to the school's Blackboard course management system. Most of the machines run Linux, but there is a modest amount of special-purpose Windows and some Unix. "Our direction is to move toward Linux," Pepin says.
Enterprise computing row (Photo by Zac Wilson)
"This is where we're looking at doing some cloudy things in the Joni Mitchell model," he says. "It will be more of what you traditionally think of as a cloud because we probably will go down the virtualization path for a large portion of it."
Clemson has more than 200 systems virtualized today, mostly to support smaller applications. "We're virtualized where it makes sense," Wilson says. "One of the problems with virtualization is, once you go down a path you're kind of stuck."
The team hopes to avoid that elephant trap by using Dell's Advanced Infrastructure Manager (AIM), which Wilson describes as an abstraction layer between the hardware and the services supported.
"AIM lets you manage the hardware behind VMware, and manage the VMs on top of VMware as well, so you have this view of your whole enterprise and you can mix and match resources," Wilson says.
One of the primary benefits: the ability to move applications between virtual and hardware-based environments, regardless of which virtualization tools are used. "If we need three more Blackboard instances we can spin that up on hardware," Wilson says, "and when things slow down, with a single reboot, shift those to virtual machines and use the hardware for something else. This is a really good product to manage your whole infrastructure and it gives you an exit strategy if you want to switch virtualization vendors."
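The idea Wilson describes -- a workload "persona" that can land on bare metal during peak demand and shift to a VM when things slow down -- can be sketched in a few lines. This is a hypothetical model of the concept, not AIM's actual API; the class names, pool sizes, and `rebalance` helper are all illustrative.

```python
# Sketch of abstraction-layer placement: a persona (a bootable OS image plus
# its network/storage identity) is assigned to either a physical or a virtual
# pool, and can be moved between them. Hypothetical names throughout.

class Persona:
    def __init__(self, name):
        self.name = name
        self.placement = None  # "physical" or "virtual"

class Pool:
    def __init__(self, kind, capacity):
        self.kind = kind          # "physical" or "virtual"
        self.capacity = capacity
        self.assigned = []

    def has_room(self):
        return len(self.assigned) < self.capacity

    def assign(self, persona):
        if not self.has_room():
            raise RuntimeError(f"{self.kind} pool is full")
        self.assigned.append(persona)
        persona.placement = self.kind

    def release(self, persona):
        self.assigned.remove(persona)
        persona.placement = None

def rebalance(persona, src, dst):
    """Conceptually, the 'single reboot' that shifts an instance
    from hardware to a VM and frees the box for something else."""
    src.release(persona)
    dst.assign(persona)

# Peak load: run an extra Blackboard instance on bare metal.
physical = Pool("physical", capacity=3)
virtual = Pool("virtual", capacity=10)
bb = Persona("blackboard-3")
physical.assign(bb)

# Demand drops: shift it to a VM and reclaim the hardware.
rebalance(bb, physical, virtual)
print(bb.placement)  # -> virtual
```

Because the persona, not the host, carries the identity, the same image can boot on either substrate -- which is also what gives the team an exit strategy from any single virtualization vendor.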
AIM also represents Clemson's first serious dip into iSCSI. With AIM, the school can boot a host from a remote instance over an iSCSI link, then move that machine around virtualization platforms. "AIM solves all the driver problems," Wilson says. "If an instance crashes you can restart, or try to boot it on another box based on policy. Hands free."
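Booting a host from a remote volume over iSCSI follows a standard discover-then-login sequence. As a hedged illustration using the open-iscsi `iscsiadm` tool (the portal address and IQN below are placeholder examples, and the commands require a live array to do anything):

```shell
# Discover the targets advertised by the iSCSI array (example portal address)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to a discovered target; its LUN then appears as a local block device
iscsiadm -m node -T iqn.2010-01.com.example:boot-lun -p 192.0.2.10:3260 --login

# Persist the session so the host reconnects to its boot volume automatically
iscsiadm -m node -T iqn.2010-01.com.example:boot-lun -p 192.0.2.10:3260 \
    --op update -n node.startup -v automatic
```

Because the boot volume lives on the array rather than on local disk, the same instance can be restarted on a different physical box -- the policy-driven, hands-free recovery Wilson describes.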
Mike Cannon, data storage architect, says Clemson just brought in two iSCSI arrays and two new QLogic 9200 Fibre Channel switches to grow out the university's Fibre Channel network to 1,024 ports. The Fibre Channel network is split into two fabrics (with diverse paths) and spans both Clemson data centers.
"The storage network really needs to converge at some point," he says, "but we're not ready yet. Today we have a Fibre Channel network, we have our enterprise Ethernet network and we have the Myrinet network, which ties all of the high-performance computing nodes together. We also have a little bit of Infiniband for testing."
Cannon says Hitachi storage systems are becoming the basic enterprise storage infrastructure at the school, supporting both directly attached and VM cloud-type environments.
Mission-critical resources are supported by Hitachi Data Systems AMS 2100 arrays, Cannon says. "Prior to that we were using a product from another vendor that required considerable time to figure out how to properly lay out the array and segment sizes. And once we delivered that to the application, if we found out we made a mistake it was real complicated to go back and retrofit another array and move the data. Now we use Hitachi Dynamic Provisioning. Hitachi configures those for us when they deliver the array and if we need more I/O, we can much more easily add spindles. We weren't able to do that with our former vendor."
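The dynamic-provisioning model Cannon describes is essentially thin provisioning: a volume advertises its full capacity up front but draws physical pages from a shared pool only on first write, and the pool can be grown later by adding spindles. A minimal sketch of that idea, with illustrative names and sizes (not Hitachi's implementation):

```python
# Thin-provisioning sketch: virtual capacity is promised up front,
# physical pages are consumed only when data is actually written.

class ThinPool:
    def __init__(self, physical_pages):
        self.free_pages = physical_pages

    def allocate(self):
        if self.free_pages == 0:
            raise RuntimeError("pool exhausted -- add spindles")
        self.free_pages -= 1

    def add_spindles(self, pages):
        # Growing the shared pool later is what was painful
        # with fixed, hand-laid-out arrays.
        self.free_pages += pages

class ThinVolume:
    def __init__(self, pool, virtual_pages):
        self.pool = pool
        self.virtual_pages = virtual_pages
        self.mapped = set()  # virtual pages already backed by physical ones

    def write(self, page):
        if page >= self.virtual_pages:
            raise IndexError("write past end of volume")
        if page not in self.mapped:
            self.pool.allocate()   # back the page on first write only
            self.mapped.add(page)

pool = ThinPool(physical_pages=100)
vol = ThinVolume(pool, virtual_pages=1000)  # advertise 10x the physical space
vol.write(0)
vol.write(0)   # rewrite of a mapped page costs nothing extra
vol.write(7)
print(pool.free_pages)  # -> 98: only two distinct pages consumed
```

The payoff is the one Cannon names: layout decisions no longer have to be right on day one, because capacity follows actual writes and spindles can be added to the pool as I/O demand grows.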
Long term, does the enterprise side of the house end up as one big Joni Mitchell cloud? "I think you'd have to end up there," Wilson says. "There will be pockets that aren't, but as you abstract your computing layer from the personas that run on it you can dynamically allocate hardware for various things. It gives you that flexibility. Virtualization is just a component of this."
Changing finance mix
One of the ways that Bottum and his team are funding all of these initiatives is through grants. Five years ago "the grant money didn't really exist," Bottum says. "And we're running about $5.5 million this year."
The majority of the grants are tied to specific faculty. Wilson and Ligon, for example, have grants for Parallel Virtual File System (PVFS) work. "It's usually almost a 50-50 split between what goes to the faculty and their departments and what goes into IT's account, so it's a nice healthy IT/faculty partnership," Bottum says.
The goal, of course, is to cover as many costs as possible. "Recognizing that Clemson is a public institution and the future of state funding is not clear, we are encouraged to become entrepreneurial," Bottum says. "So the goal is to bring in funding in a way that doesn't detract from what we are doing for Clemson."
But there is only so far that you can take this model, Bottum says. "We're at a point now where I don't think any of this happens without a public/private partnership, where we really poke holes in our respective walls and reach inside the other one and start to maximize ways we collaborate. The private schools had to re-engineer themselves in the '90s and into the 2000s, and now it is the public schools' turn. I think the future is figuring out how we fill the gaps, how we take advantage of some of the opportunities."