Inside AT&T’s grand plans for SDN

Frame relay and ATM go away as the company virtualizes more functions. A Q&A with the man driving the transformation.


AT&T spends some $20 billion per year on capital expenditures, the bulk of that on its massive network, and recently announced a bold plan to adopt Software Defined Networking and Network Function Virtualization in a big way. Network World Editor in Chief John Dix caught up with AT&T Senior Vice President of Architecture & Design Andre Fuetsch for a deeper dive on the grand plan.

Let’s start with some background on your role. As I understand it, you lead a team of 2,000 engineers and computer scientists.

Basically I’m over the architecture and design organization and that includes AT&T’s advanced research organization, AT&T Labs. Our Foundry is also under my purview, which is basically an innovation program where we invite select vendors to come play in our sandbox and innovate new ideas. The bulk of my organization is architecture and design, as well as development. What we do is take the architectures we’re working on, prototype them, build them out, test them, and, if they look viable, scale them and put them into production.

AT&T Senior Vice President of Architecture & Design Andre Fuetsch

You folks recently set a goal of software controlling 75% of the network by 2020. Can you expand on that?

Let me be clear about the goal. Our objective is to virtualize and control over 75% of our target network under our Domain 2 architecture by 2020. Of course there are parts of our network that we consider legacy that we’re going to migrate off of or retire. Those aren’t part of what we define as our target network, so we wouldn’t try to virtualize some of those older TDM-based products and services. That doesn’t make any sense since we’re moving to IP, so it’s really more about virtualizing the network functions that we see a future for.

When you say legacy, I presume you’re talking about things like 5E and 4ESS type switches on the voice side, but how about on the data side?

Those would be things like you described in the voice world, some of those old Class-5 and Class-4 switching architectures, and an example in a data context would be ATM and frame relay. Those networks are also going away. Ethernet and IP, on the other hand, extending even into the optical layer, are the areas we’re focusing on; those are what we would virtualize and put under SDN control. What we’re looking at is taking those physical network functions, separating the software from the hardware, and putting that software on our cloud. That gives us incredible flexibility, the ability to do much, much more with those network functions.

We’re still going to have layers of the network. You will still have an optical layer, a transport layer, the whole layered OSI stack, if you will. Wherever those functions are that we can virtualize, we will put them on a common cloud foundation and that’s where you have a real point of convergence.

The goal, I presume, is to realize internal efficiencies while also making new capabilities available to customers?

Right. It’s both. As background, look at our wireless data traffic as an example. Over the last seven years we’ve had over 50,000% growth just in the wireless data traffic across our network. So with that exponential growth we have a big motivation to make sure the cost curve to support that demand goes down. So economics is a big driver. But I would say equally, if not more importantly, it is also because we have this opportunity to transform these network functions and decompose them into software assets and software workloads, which will give us much more flexibility and control, and the ability to create new services and give more control to the customer. And of course there are going to be revenue opportunities with this transformation as well. 

What kind of internal efficiencies are you expecting?

I can’t give you specifics, but we are modeling it out and I’ll just say the amount of efficiency you can get is significant. It has to be. If you take a look at where the market is going and the competition for customers in terms of pricing plans and the continued rise in data consumption, especially video, you need a roadmap that gets you significant economic efficiencies in order to sustain the business.

Some people talk about Software Defined Networking and network function virtualization as being one and the same thing, while others say they are different animals. How do you see it?

They’re different things. Network functions that traditionally have lived in boxes, like a router, can now run as software on a highly commoditized cloud infrastructure; that’s the virtualization piece. The SDN component is controlling that function from a centralized master controller. It’s more than just porting that piece of software out of the box. It’s also the ability to control that box differently. That’s really important because it gives us the ability to do different things with that function.
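To make that split concrete, here is a toy sketch, not AT&T’s actual architecture, of the SDN half of the idea: the control plane lives in one centralized controller, and the data-plane elements are just software that applies whatever rules the controller pushes down. All class and rule names here are invented for illustration.

```python
class Controller:
    """Centralized control plane: computes and pushes forwarding rules."""
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def push_rule(self, match, action):
        # One decision here updates every data-plane element at once.
        for device in self.devices:
            device.install_rule(match, action)


class VirtualRouter:
    """Data plane only: forwards according to whatever rules were installed."""
    def __init__(self, name):
        self.name = name
        self.rules = {}

    def install_rule(self, match, action):
        self.rules[match] = action

    def forward(self, dest_prefix):
        return self.rules.get(dest_prefix, "drop")


ctrl = Controller()
r1, r2 = VirtualRouter("r1"), VirtualRouter("r2")
ctrl.register(r1)
ctrl.register(r2)
ctrl.push_rule("10.0.0.0/8", "port-3")
```

The point of the separation is visible in `push_rule`: the forwarding decision is made once, centrally, and every software router receives it, instead of each box computing its own view of the network.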

By the way, it isn’t a one-to-one translation. It doesn’t mean this one physical network function becomes one virtual network function. It could be decomposed into multiple virtualized network functions that could be moved around differently. You could have them operating on this common cloud environment and then move that function closer to the user if that improved performance. So you get that flexibility, whereas before you would have to dispatch a technician that would physically have to go move that box closer to the customer. Now we can do that within our cloud environment in a much more controlled, automated and faster way.
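The "move the function closer to the user" idea can be sketched in a few lines. This is a hypothetical illustration only; the site names, latency figures, and function names are made up, and real placement decisions weigh far more than latency.

```python
def best_site(latency_ms):
    """Return the cloud site with the lowest measured latency to the customer."""
    return min(latency_ms, key=latency_ms.get)


def place_vnf(vnf, latency_ms, placement):
    """Re-place a virtualized network function at the best site.

    In the physical world this was a truck roll; here it is a state update.
    """
    site = best_site(latency_ms)
    placement[vnf] = site
    return site


placement = {}
measured = {"dallas": 42.0, "chicago": 18.5, "atlanta": 27.3}
place_vnf("vFirewall-cust42", measured, placement)
```

Because the function is software on a common cloud, relocating it is a scheduling decision rather than a dispatch, which is exactly the flexibility described above.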

Is it similar at all to the separation of the control plane from the data plane that was done eons ago with Signaling System 7?

That’s a great analogy. In the voice switching world the switches used to carry signaling in-band over the trunks, and then SS7 came along and said, “Look. We’re going to take the signaling piece out and just utilize the trunks for the bearer path, and signaling will occur across an overlay network.”

When we talk about SDN control, that is like a new control plane for these network functions. In the event of a failure, or a high-volume event where changes need to occur, you can respond on this new plane. So it gives you the flexibility to do things much differently. These network functions are no longer limited to operating autonomously; they can operate in a distributed fashion, as many of them do today, in a centralized fashion, or in a hybrid of both. Again, this is about giving us more flexibility.

We talked about how this effort will benefit you internally, but what kind of changes can customers expect?

Earlier this year we launched our Network on Demand service, where we offer the customer control over their Layer 2 network. That means they can add more bandwidth or change the quality of service between sites by themselves. So, whereas typically this change would take days or weeks, now they can make it in seconds or minutes. That’s a real differentiator. We’re using that as an example of an SDN-controlled service. I can’t discuss specifics now, but we’re going to really take that to the next level in the coming year to give customers new capabilities and more control.
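A minimal mock, not AT&T’s real Network on Demand API, shows what customer-controlled Layer 2 provisioning amounts to: the change is a software state update that takes effect immediately, rather than a manual order working its way through a multi-week process. All names and values are invented.

```python
class Layer2Service:
    """Toy model of a customer-controllable Layer 2 service endpoint."""
    def __init__(self, site, bandwidth_mbps, cos="default"):
        self.site = site
        self.bandwidth_mbps = bandwidth_mbps
        self.cos = cos  # class of service between sites

    def set_bandwidth(self, mbps):
        # Customer-initiated, effective immediately: no truck roll, no ticket.
        self.bandwidth_mbps = mbps
        return f"{self.site}: bandwidth now {mbps} Mbps"

    def set_cos(self, cos):
        self.cos = cos
        return f"{self.site}: class of service now {cos}"


svc = Layer2Service("branch-17", bandwidth_mbps=100)
svc.set_bandwidth(500)
svc.set_cos("real-time")
```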

AT&T has said publicly that you’ll address internet and virtual private network services this way. Will those services ultimately replace frame and other data services?

Yes. If you look at the evolution of ATM and frame, what enterprise customers want is flexibility, consistency and ubiquity. Frame and ATM are fairly static services. You don’t have the dynamic nature that, as an example, Network on Demand delivers. We believe that’s going to be a big differentiator for us: the ability to give customers control, the ability to use these services the way they want.

As an example, perhaps in the evening hours they want to do a large batch processing job or run backups, so they need larger pipes to do that, but during the day they don’t need those larger pipes nor do they want to pay for them. So they will have that flexibility, and we think this is a compelling proposition for the customer and for us internally, as well, because having those dynamic controls helps us better manage and better utilize the network so we can be more efficient, especially given that growing demand I mentioned earlier.
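The evening-batch scenario above can be sketched as a simple time-of-day bandwidth policy. The window and rates here are invented purely for illustration; a real service would drive this from customer-defined schedules, not a hardcoded function.

```python
def provisioned_mbps(hour, day_rate=100, night_rate=1000, window=(22, 6)):
    """Return the bandwidth a site should be provisioned at for a given hour.

    Larger pipe during the overnight batch/backup window; smaller (and
    cheaper) during the day, when the customer doesn't need it.
    """
    start, end = window
    in_window = hour >= start or hour < end  # window wraps past midnight
    return night_rate if in_window else day_rate


assert provisioned_mbps(23) == 1000  # backups running overnight
assert provisioned_mbps(14) == 100   # daytime: pay for less
```

This is also the internal win the answer mentions: if bandwidth rises and falls with demand, the network can be groomed and utilized more efficiently than if every customer held peak capacity around the clock.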

How does MPLS fit into this whole equation?

MPLS is one of the cornerstones of our backbone and we see SDN control giving us even more capabilities, especially in the traffic engineering context. It gives us better granular control in how we groom traffic. We can route traffic around much more efficiently. The promise and capabilities we see with the future of SDN is really quite high.
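One invented example of the traffic-engineering angle: with centralized control you can select paths by available headroom rather than by IGP metric alone. The topology and utilization numbers below are made up, and real MPLS-TE involves constraints well beyond a single max-utilization comparison.

```python
def least_loaded_path(paths, link_util):
    """Choose the candidate path whose busiest link is least utilized."""
    def worst_link(path):
        return max(link_util[link] for link in path)
    return min(paths, key=worst_link)


# Fractional utilization of each directed link in a toy topology.
link_util = {("a", "b"): 0.90, ("b", "d"): 0.40,
             ("a", "c"): 0.30, ("c", "d"): 0.35}

candidates = [
    [("a", "b"), ("b", "d")],  # runs through a hot link
    [("a", "c"), ("c", "d")],  # more headroom end to end
]
chosen = least_loaded_path(candidates, link_util)
```

A plain IGP would treat both candidates identically if their metrics matched; a centralized controller with a live utilization view can steer traffic onto the path with headroom, which is the grooming capability described above.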

We also begin to see a network where we don’t have to be so constrained by some of the older protocols like BGP and OSPF. One of the interesting forums we are participating in is ON.Lab, which is led by academia and looks way out on the horizon at the future of networking and how fast it can change. And there are forums we're involved in like OpenDaylight, where more of the current vendor community is engaged, and several others. I could go on and on.

What’s really interesting is that this is a paradigm shift for us, because in the past we would typically build very vertical, monolithic solutions with partners, like the 5E and 4ESS you mentioned. Now we’re looking at a whole new ecosystem beyond just the traditional vendor community; we’re looking at not just taking open-source software in and using it, but also contributing software back to the community. We think that’s part of this paradigm shift. No longer are we just focused on our traditional supply chain; we’re opening it up to the whole new open-source software world as well.

In terms of getting there, is this going to require forklift upgrades of existing infrastructure or can you tweak some of that to work in this new environment?

I think it’s going to be both. In some areas, some of the upper layer services were architected for the cloud. Those will be easier to tweak. Technologies that traditionally have been built on a proprietary stack from the hardware up, those will have to likely be forklifted out.

Sounds like exciting times.

It really is. I would say this is probably the most revolutionary change I’ve seen in my career for all the reasons we just talked about.

