Q&A: Stanford applies a clean slate to the Internet

Academics hope to develop suggestions to improve the ’Net over time.

Nobody’s going to tear down the Internet and rebuild it from scratch, but academics at Stanford University are imagining what the new blueprint would look like if they did, and they hope their work will lead to an Internet that works better in 20 years than it does today. So far the program, called Clean Slate Design for the Internet, has narrowed its focus to what it considers the four key problems that need to be addressed: establishing a sustainable economic model for the providers that own the Internet infrastructure; establishing a trust mechanism so people can know with certainty where traffic comes from; upgrading mobility from an annoying special case that isn’t handled well to a mainstream access mode; and improving performance. Clean Slate participants, who include representatives from networking vendors and service providers, met last week to discuss their progress. Network World Senior Editor Tim Greene talked to the leader of the effort, Associate Professor Nick McKeown, about how the project is going.

What’s the point of a new blueprint for the Internet in the first place?

You might think of it as a purely academic exercise: If we got the chance to start over, what would we do? The outcome of that could be a really well-articulated blueprint that says this is how it should be. We could say, ‘This is what we should do, and how do we get there?’

The other way is to ask, ‘Where would we like it to be in 15 or 20 years?’ This is the approach we’re taking, and as part of that we’re thinking about how you might get there.

So where do you want to be?

Anything we say now is a little bit half-baked, or a tenth-baked. There is a collective belief that it needs to be done and only a partial answer as to how.

First, the infrastructure of the network needs to be economically sustainable. The problem is the network operators aren’t making any money from public Internet service. There are some good reasons for that. They were starting a business in which the marginal cost of providing service to an extra customer is zero. That makes it a natural monopoly, because in a competitive market, if the marginal cost of providing further service is zero, then competition is going to drive the price down to zero, and everyone is going to go out of business unless they’ve already paid for their infrastructure. The one who has already paid for his infrastructure is the biggest guy, and he’s going to wind up with the monopoly.

It may be that the right outcome is to accept that this is going to be a monopoly and see how, within that environment, we can make it work well.

What about trust?

We want a network that is trustworthy and within that I would include security. Denial of service, viruses, worms and to some extent spam are consequences of the Internet architecture. They’re not inherent in any communication mechanism. They’re hard to solve because it’s very hard to determine the origin of packets. How do you tie a person to the data that is sent?

We are thinking of removing the Ethernet access switches and replacing them with switches that contain a flow table and basically nothing else. If a packet arrives and it doesn’t match an entry in the flow table, the switch sends it off to a centralized controller. The routing decision is made by the centralized controller, and whether to accept the flow at all is a policy decision the controller makes as well. This gives administrators centralized control over what flows are allowed on the network, which would reduce the ability of viruses to spread, for example.
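Roughly, the flow-table-plus-controller behavior McKeown describes can be pictured with the hedged Python sketch below; the class names, the blocked-port policy and the hashed choice of output port are illustrative assumptions, not details of the Stanford design.

```python
# A minimal sketch (not the Stanford design itself): the switch keeps only a
# flow table; on a miss it asks a centralized controller, which applies policy
# and either installs a forwarding rule or rejects the flow.
# All class names, the blocked port and the port-hashing rule are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src: str        # source host/address
    dst: str        # destination host/address
    dst_port: int   # destination port

class Controller:
    """Centralized routing and policy decisions."""
    def __init__(self, blocked_ports=(445,)):          # assumed worm-prone port
        self.blocked_ports = set(blocked_ports)

    def decide(self, key: FlowKey):
        # Policy decision: is this flow allowed on the network at all?
        if key.dst_port in self.blocked_ports:
            return None                                 # reject: no rule installed
        # Routing decision: pick an output port (trivially hashed here)
        return hash(key.dst) % 4

class Switch:
    """Holds a flow table and basically nothing else."""
    def __init__(self, controller: Controller):
        self.controller = controller
        self.flow_table = {}                            # FlowKey -> output port

    def handle_packet(self, key: FlowKey):
        if key not in self.flow_table:                  # table miss: ask controller
            port = self.controller.decide(key)
            if port is None:
                return "dropped by policy"
            self.flow_table[key] = port                 # cache the decision
        return f"forwarded on port {self.flow_table[key]}"

switch = Switch(Controller())
print(switch.handle_packet(FlowKey("alice", "fileserver", 80)))   # allowed
print(switch.handle_packet(FlowKey("alice", "fileserver", 445)))  # blocked by policy
```

The point the sketch tries to capture is that the switch holds no policy of its own: every new flow is admitted, routed or rejected by the controller.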

What do you suggest to make Internet use easier for mobile devices?

Mobility is the area we have done the least work on. If you want everyone to be able to move about and update their Palm without interruption, the Internet makes that very hard to do today.

What do you recommend?

We’re trying to figure out where to start.

What can be done to boost Internet performance?

Routers at the core of the Internet process every packet, yet in the topology they are connected to only two or three neighbors. They process every packet just to determine whether it goes east, west, north or south. That’s an awful lot of processing to make a simple decision. If, instead of processing every packet, you route in a more aggregated way, you can take advantage of optical switches. A whole load of little routers, called boundary or edge routers, sit near the big routers. They’re the ones aggregating all the traffic coming up from the users, and they already know where they want these packets to go. So instead you could put a simple, small, almost passive optical switch in place of that big router, and the edge routers could set up an [optical] circuit across the Internet to another edge router to which they have a lot of traffic to send.
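As a rough illustration of that aggregation idea, the Python sketch below has edge routers count traffic per remote edge and ask a stand-in optical core for a circuit once an assumed threshold is crossed; none of the names or numbers come from the interview.

```python
# A hedged sketch: boundary (edge) routers already know which remote edge each
# packet is headed to, so the core can be a simple optical switch that just
# cross-connects circuits between edges exchanging a lot of traffic.
# The threshold and all names are illustrative assumptions.

from collections import defaultdict

CIRCUIT_THRESHOLD_BYTES = 10_000_000   # assumed traffic level that justifies a circuit

class OpticalCore:
    """Stands in for the 'simple, small, almost passive optical switch'."""
    def __init__(self):
        self.circuits = set()          # (local_edge, remote_edge) cross-connects

    def set_up_circuit(self, a, b):
        self.circuits.add((a, b))

class EdgeRouter:
    def __init__(self, name, core: OpticalCore):
        self.name = name
        self.core = core
        self.bytes_to_edge = defaultdict(int)   # aggregated traffic per remote edge

    def send(self, remote_edge, nbytes):
        self.bytes_to_edge[remote_edge] += nbytes
        # Once enough traffic is headed toward one remote edge, bypass the big
        # core routers with a direct optical circuit.
        if (self.bytes_to_edge[remote_edge] >= CIRCUIT_THRESHOLD_BYTES
                and (self.name, remote_edge) not in self.core.circuits):
            self.core.set_up_circuit(self.name, remote_edge)

core = OpticalCore()
sf = EdgeRouter("sf-edge", core)
for _ in range(20):
    sf.send("ny-edge", 1_000_000)      # aggregate user traffic toward one edge
print(core.circuits)                   # {('sf-edge', 'ny-edge')}
```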

What about the core of the Internet?

We’re literally just starting to think about that now. On the public Internet you have people, and you don’t know reliably who they are. Should we make it so you do know who they are? Or should we force them to reveal who they are when it matters? I don’t think there’s any reason to suspect that you would have the same solution for the public network as for the private. And if you believe there are only two options, public and private, maybe that’s not such a bad option.

We’re in the midst of very rapid growth of the Internet caused by massive amounts of video coming onto the network. Imagine if everyone in the U.S. were watching four hours of television per day and all four of those hours were being delivered as HDTV over the network. The amount of capacity you’d need would be many orders of magnitude bigger than what the Internet has today. That is where we are headed.
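A back-of-envelope calculation makes the scale of that concrete; the population figure and per-stream HDTV bit rate below are assumptions chosen only to illustrate the order of magnitude, not numbers from the interview.

```python
# A rough capacity estimate under assumed inputs.

us_population   = 300e6          # assumed: rough U.S. population
hours_per_day   = 4              # from the interview: four hours of TV per person
hdtv_bit_rate   = 10e6           # assumed: ~10 Mbit/s per HDTV stream

# Average aggregate rate if viewing were spread evenly over the day
avg_bits_per_sec = us_population * (hours_per_day / 24) * hdtv_bit_rate
print(f"average aggregate demand: {avg_bits_per_sec / 1e12:.0f} Tbit/s")   # ~500 Tbit/s

# Peak demand is worse: everyone watching in the same window at once
peak_bits_per_sec = us_population * hdtv_bit_rate
print(f"peak aggregate demand:    {peak_bits_per_sec / 1e15:.1f} Pbit/s")  # ~3 Pbit/s
```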

The question is, could you just scale up the network the way it is today? Technically, probably. But it would mean the Internet exchanges would need 10 to 20 times as much space, and the routers would have to have 10 to 20 times the capacity, which means much more power. It’s very difficult to see how all that technology could scale in a way that you could say, yes, that actually makes sense.

It all starts to look suspiciously fragile; no, fragile’s not the right word. It just doesn’t seem very realistic that you could get there that way.

So what would you do?

If you ask the router companies what their biggest problem is right now, their answer is power. That’s their biggest constraint. Optical switches consume comparatively no power, and they scale to unbelievable numbers. Electronics in routers are stuck with Moore’s Law at best. [With routing decisions pushed to the edge] the core of the network lends itself very nicely to fast, dynamic optical circuit switching, where the circuits come and go fairly quickly, perhaps every few minutes or so. They’d be established and taken down between the edge routers in a way that allows aggregating all of this traffic. This, I believe, will lead to a much more easily scaled core of the network and to more easily scaled exchange points, and it will enable the network to scale in the way that is needed.
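The circuit lifecycle he describes, with circuits coming and going every few minutes, might be sketched along the following lines; the idle-timeout value and class names are assumptions for illustration only.

```python
# A hedged sketch of fast dynamic optical circuit switching: circuits between
# edge routers are set up when traffic appears and torn down after a few
# minutes of idleness. Timing constants and names are illustrative assumptions.

import time

IDLE_TEARDOWN_SECONDS = 180        # assumed: circuits live on the order of minutes

class CircuitScheduler:
    def __init__(self):
        self.last_used = {}        # (edge_a, edge_b) -> last time traffic was seen

    def offer_traffic(self, edge_a, edge_b, now=None):
        """Record traffic between two edges, establishing a circuit if needed."""
        now = time.time() if now is None else now
        pair = (edge_a, edge_b)
        if pair not in self.last_used:
            print(f"set up circuit {edge_a} <-> {edge_b}")
        self.last_used[pair] = now

    def reap_idle(self, now=None):
        """Tear down circuits that have been idle too long."""
        now = time.time() if now is None else now
        for pair, last in list(self.last_used.items()):
            if now - last > IDLE_TEARDOWN_SECONDS:
                print(f"tear down circuit {pair[0]} <-> {pair[1]}")
                del self.last_used[pair]

sched = CircuitScheduler()
sched.offer_traffic("sf-edge", "ny-edge", now=0)
sched.reap_idle(now=400)           # well past the idle timeout: circuit comes down
```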

Learn more about this topic

Stanford to launch Internet study group

Stanford researchers scheming to rebuild Internet from scratch

Smarter handhelds could bolster wireless, mobility
