Five years ago, IT was decentralized at the University of New Mexico. “Every school or college had their own IT, and in most cases they were completely under-resourced – a one-person shop having to do phones, apps, email, desktop, servers, storage, disaster recovery, all of that,” said Brian Pietrewicz, deputy CIO at the University of New Mexico.
The university transitioned to a self-service model that enables each of its more than 100 departments to deploy infrastructure and application services itself and have them managed by the now-centralized IT team.
Adopting VMware’s vCloud Automation Center enabled departments to consume cloud resources, while also giving the management team the ability to curtail that consumption if necessary.
“Going from physical machines to virtual machines to vCAC cut the provisioning time down from 12 weeks to three weeks to three days to 20 minutes, but obviously there’s a big gap in there – deploying network, deploying firewalls and the security components,” Pietrewicz said. “The key missing component was networking.”
What is network automation?
Traditionally, network provisioning and configuration management are manual, error-prone processes. Network virtualization enables the creation of networks in software, abstracted from the underlying physical hardware. IT can provision networks quickly, with network and security services attached to workloads using a policy-driven approach.
Automation takes things to the next level; network functions, including managing bandwidth, load balancing, and performing root cause analysis, are provisioned automatically based on predefined policies.
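The policy-driven approach described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor’s actual API: every class, field, and policy name below is a hypothetical stand-in for the idea that network and security services are derived from a predefined policy rather than configured by hand.

```python
from dataclasses import dataclass, field

# Hypothetical model of policy-driven provisioning; names are illustrative.
@dataclass
class NetworkPolicy:
    name: str
    bandwidth_mbps: int           # bandwidth cap applied automatically
    load_balanced: bool           # whether to front the workload with a load balancer
    firewall_rules: list = field(default_factory=list)

@dataclass
class Workload:
    name: str
    policy: NetworkPolicy         # services attach to the workload via policy

def provision(workload: Workload) -> dict:
    """Derive the full network configuration from the predefined policy,
    instead of touching switches, load balancers and firewalls by hand."""
    return {
        "network": f"net-{workload.policy.name}",
        "bandwidth_mbps": workload.policy.bandwidth_mbps,
        "load_balancer": workload.policy.load_balanced,
        "firewall": list(workload.policy.firewall_rules),
    }

web_policy = NetworkPolicy("web", bandwidth_mbps=1000, load_balanced=True,
                           firewall_rules=["allow tcp/443 from any"])
config = provision(Workload("web-01", web_policy))
print(config["network"])  # net-web
```

The point of the sketch is that once the policy exists, every workload that references it gets the same network, bandwidth and firewall treatment automatically, which is what collapses provisioning from weeks to minutes.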
To eliminate the network bottleneck, the University of New Mexico deployed VMware’s NSX network virtualization platform and vRealize Automation cloud automation software. Pietrewicz talked about the university’s experience recently at the VMworld conference in Las Vegas. “It’s really the agility and automation piece that led us down the NSX path,” he said of the university’s reasons for adopting network virtualization.
Microsegmentation improves security
But beyond agility, NSX also enables microsegmentation, which represents a substantial improvement in security, he said.
NSX has been gaining traction as a security tool among companies that are interested in microsegmentation – separating individual workloads into different zones that are isolated from other segments and secured individually. Microsegmentation lets companies place virtual firewalls around servers to control the growing amount of traffic that’s moving laterally within data centers.
If breaches occur, microsegmentation limits potential lateral exploration of networks by hackers. And because NSX operates at the hypervisor layer, it retains its agility: if a workload moves, the security policies and attributes move with it.
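The microsegmentation idea can be reduced to a small sketch: a default-deny firewall evaluated per workload, keyed by workload identity rather than IP address so the policy travels with the VM. The policy table and workload names here are hypothetical, not NSX’s actual rule format.

```python
# Illustrative microsegmentation model: each workload carries its own
# virtual firewall, keyed by workload identity (not IP), default deny.
SEGMENT_POLICY = {
    "web-01": {("app-01", 8080)},   # web tier may reach the app tier only
    "app-01": {("db-01", 5432)},    # app tier may reach the database only
    "db-01":  set(),                # database initiates nothing
}

def lateral_traffic_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: east-west traffic passes only if explicitly allowed."""
    return (dst, port) in SEGMENT_POLICY.get(src, set())

print(lateral_traffic_allowed("web-01", "app-01", 8080))  # True
print(lateral_traffic_allowed("web-01", "db-01", 5432))   # False
```

Under this model a compromised web server can reach only the app tier it legitimately talks to; the database is unreachable from it, which is exactly the lateral-movement limit the panelists describe.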
Sean Jabro, VMware administrator at Intelligent Software Solutions (ISS), a Polaris Alpha company, echoed the need for speed of network provisioning. “Pre-NSX, we were not very good at automating anything. Our mean time to production with any kind of system was weeks, easily,” said Jabro, who also spoke at VMworld about his company’s automation efforts. “Our developers really wanted to start moving forward fast, and IT just could not keep up.”
Developers at ISS had moved to adopt a DevOps model, which requires an agile infrastructure that can handle constant changes. Networking was becoming a bottleneck to the speed of business. “We were not even close to being agile enough until we started really adopting some of these automation processes,” Jabro said.
Security, too, fueled ISS’ deployment of NSX. “We have a very heavy developer community at my company, and shadow IT is happening all over the place,” Jabro said. “So going with a product like NSX, to be able to really lock down our security posture inside while still allowing them the ability to spin up VMs in the environment and have automatic firewall rules in place to allow them to be as accessible as they need, right off the bat, is a huge deal for us.”
What to automate?
“When you think about the number of steps that occur between the time a VM is initially built to something that is in the end deployed with a network and a firewall – the hardest part is nailing down everything you have to do to get to that point,” Pietrewicz said. Processes can entail hundreds or even thousands of steps that cross roles, departments and systems.
The University of New Mexico has gotten to the point where it can deploy VMs with a base firewall rule set and a base network as part of the blueprint, Pietrewicz said. But the work isn’t done. A plethora of tech choices leads to more operational challenges.
“Where you used to have one or two options for firewall, now you have thousands. Tags and policies can go in any kind of direction,” Pietrewicz said. “When somebody says, ‘I need this port opened on this machine to this group of IPs,’ the number of tags, and the general flexibility of the product is making it so that right now, we are still trying to figure out exactly what our operations looks like, after that initial deployment. We keep having to bring everybody back in the room together to have the conversation – our security team, our platforms team, our network team – ‘what are we really doing here?’”
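Pietrewicz’s example – “I need this port opened on this machine to this group of IPs” – and the explosion of tag combinations can be illustrated with a tag-based rule model. The tags, VM names and rule format below are hypothetical, not a real product’s schema.

```python
# Illustrative tag-driven firewall: rules reference tags, not machines,
# so opening a port is a matter of tagging -- and tag sprawl is the risk.
VM_TAGS = {
    "vm-101": {"web", "prod"},
    "vm-102": {"db", "prod"},
    "vm-201": {"web", "dev"},
}

# rule: (source tag, destination tag, port)
RULES = [
    ("web", "db", 5432),  # any web-tagged VM may reach any db-tagged VM
]

def allowed(src_vm: str, dst_vm: str, port: int) -> bool:
    src, dst = VM_TAGS.get(src_vm, set()), VM_TAGS.get(dst_vm, set())
    return any(s in src and d in dst and p == port for s, d, p in RULES)

print(allowed("vm-101", "vm-102", 5432))  # True: prod web -> prod db
print(allowed("vm-201", "vm-102", 5432))  # also True -- the dev VM carries
                                          # the "web" tag too, the kind of
                                          # surprise Pietrewicz describes
```

The second result is the operational problem in miniature: one loosely scoped tag quietly opens a production database to a dev machine, which is why the university keeps pulling its security, platforms and network teams back into the same room.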
Greater standardization is imperative and can clear many of these deployment hurdles.
Going through the process of automating certain network options made it clear to IT leaders at IHS Markit that they needed to standardize more things in the environment, said Andrew Hrycaj, senior network operations specialist at IHS Markit, an information and analysis firm based in London.
“When you have to bring an automated component into your network, into your infrastructure, and you continually have to punch these things out, it forces you to create standardized processes so that people will follow them,” Hrycaj said. “And then, it creates a well-defined service offering. If your developers, your security – if everyone knows what they will get out of your infrastructure, then there are fewer questions.”
Realizing the potential of NSX to automate and secure networks isn’t easy, however. For starters, it requires a cultural shift.
“It’s not just a technological change, there’s also a people and process change involved in it,” said Scott Goodman, product marketing manager at VMware. “We’re used to operating in silos, and NSX starts to blur those lines and break down those barriers. So it can be a little challenging to figure out who, exactly, is going to do what.”
Goodman moderated a discussion among Jabro, Pietrewicz, and Hrycaj. All three panelists echoed Goodman’s warning about the cultural challenges required for network automation.
“Getting the network and the security guys together in the same room, on the same page, was probably the most difficult part,” Jabro said. “For us, it was more of a social change than anything else.”
“One of the biggest challenges that I didn’t expect was the pushback from the network administrators,” Pietrewicz said.
“From our perspective, it was a tough transition at first, because this is a brand new way of looking at networking,” Hrycaj said.
VMware’s NSX decouples security functions from the physical infrastructure and embeds them into the hypervisor, which allows security policies to travel with virtual workloads.
“The cool thing is that you get to change how you think about your security posture instead of just us network guys thinking about IP addresses and port numbers and that’s it,” Hrycaj said. “Once we got our heads around that, and we got into the room with the security team, we were able to take what may have seemed like unrealistic expectations in the past, and turn that into something that we could do in a short amount of time.”
But “it takes a lot of training and it takes a lot of talking,” Hrycaj said. Over time, “it has increased our engagement with security, which is a good thing.”