I ran into Tony Hain at NANOG last week, and he told me he had gotten pounded with questions about IPv4 address exhaustion after my post discussing his and Geoff Huston’s studies. His response to them was something to the effect of, “Jeff just repeated what I’ve been saying for years. Why are you asking now?” My response to Tony was “Jeez, I didn’t think anyone was reading my posts!” I think I’ve gotten more comments since I stopped posting than I did when the blog was active. Anyway, I decided to start it up again.

And a good place to start is with a request from Murilo, in response to the last post:

I would like to ask you if you could speak a bit about EIGRP vs OSPF. Both are IGP protocols and if you have a network only with Cisco routers what is the best option?

With apologies to my friends at Cisco, I have to say that I’ve never recommended EIGRP to any of my clients. I’ve worked with many who have already made up their mind in favor of EIGRP and I’ve acquiesced to their wishes, but if I’m asked I adamantly recommend OSPF.

For years I’ve referred to EIGRP as a consultant’s best friend. It’s easy to configure, doesn’t require you to think much about your network topology, and works very well in networks up to a certain size. Just slap another router in the network as needed, turn on EIGRP, and you’re done. But when your network grows large enough to run into scaling limits, forcing you to finally think about your topology, untangling EIGRP can be daunting. That’s when many operators call a consultant like me, who is happy to come in and implement an EIGRP-to-OSPF migration project for lots and lots of money. So in that mercenary way I’m quite fond of the protocol.
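To illustrate just how little EIGRP asks of you, here’s a sketch of the difference (AS number, process ID, and addressing are hypothetical):

```
! EIGRP: two lines and every 10.x interface is routing.
router eigrp 100
 network 10.0.0.0
!
! OSPF forces an area decision on day one, even for a tiny network.
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
```

That up-front area decision is exactly the "thinking about your topology" that EIGRP lets you skip.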

The primary scaling limitation with EIGRP is that it has no mechanism for setting internal boundaries, which are important for controlling prefix summarization and database size, the way OSPF areas do. You can fake this by running multiple EIGRP processes and redistributing between them, but why use a kludge to accomplish something OSPF does as an integral part of the protocol?
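Here’s roughly what that kludge looks like next to the OSPF equivalent (AS numbers, process ID, and prefixes are made up for illustration):

```
! The EIGRP "boundary": two processes stitched together with
! mutual redistribution on the border router.
router eigrp 100
 network 10.1.0.0 0.0.255.255
 redistribute eigrp 200
!
router eigrp 200
 network 10.2.0.0 0.0.255.255
 redistribute eigrp 100
!
! OSPF expresses the same boundary natively: the router becomes
! an ABR, and the area range summarizes at the boundary.
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0
 network 10.2.0.0 0.0.255.255 area 1
 area 1 range 10.2.0.0 255.255.0.0
```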

The above is not to say areas are always a good thing, either. An interesting phenomenon I’ve observed over the years is that while EIGRP networks tend to get out of control because they remain a single, flat domain as the network expands, many OSPF designers go to the other extreme and overuse areas. I’ve seen networks of 50 or so OSPF routers, which would operate just fine as one big area, needlessly divided into more than a dozen areas.

Where EIGRP scaling problems usually become evident is with stuck-in-active (SIA) conditions, in which a reply to a query is not heard within the active timer (three minutes by default), causing the adjacency to be reset and the neighbor flushed from the neighbor table, which can severely destabilize the network. SIAs should not happen even in very large networks, but once again, because you don’t have to think much about a growing EIGRP topology, you can get yourself into a situation where they do in fact occur. Cisco has added some optimizations in recent years to help prevent SIAs, but they still happen.
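One of those optimizations is stub routing, which bounds query scope: a spoke router marked as a stub is never queried for alternate paths, so queries can’t propagate through it and get stuck. A sketch of the configuration on a hypothetical spoke (AS number and addressing are made up):

```
! On a spoke/branch router: advertise connected and summary
! routes, but tell neighbors not to send queries through here.
router eigrp 100
 network 10.0.0.0
 eigrp stub connected summary
```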

EIGRP does have summarization capabilities, but here too it doesn’t make you think much about your topology, which can get you into trouble as the topology grows. All this talk of being forced to think about your topology raises an obvious objection, though: if you choose OSPF at the start because you are considering where your topology might be in five years, then you are aware enough to build an EIGRP topology that would also scale.
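The contrast is worth seeing: EIGRP summarization is configured per interface, on any router, so nothing ties it to a sensible topological boundary; OSPF summarization lives on the ABR, at the area boundary. A hypothetical example (AS number, process ID, interface, and prefix are illustrative):

```
! EIGRP: summarize anywhere you like, on any interface.
interface Serial0/0
 ip summary-address eigrp 100 10.2.0.0 255.255.0.0
!
! OSPF: summarization is anchored to the area boundary on the ABR.
router ospf 1
 area 1 range 10.2.0.0 255.255.0.0
```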

And then there’s DUAL. The algorithm is lots of fun to study and to write about, but it’s not so fun when you’re in the middle of a serious network outage. It just isn’t as easy to understand as OSPF, and can lead to some lengthy head-scratching when you’re trying to figure out an intricate network behavior.
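The heart of the head-scratching is usually DUAL’s feasibility condition: a neighbor qualifies as a loop-free backup (a feasible successor) only if its reported distance is strictly less than your current feasible distance. A toy sketch in Python, with made-up metrics rather than real EIGRP composite metrics:

```python
def feasible_successors(neighbors, feasible_distance):
    """Apply DUAL's feasibility condition: a neighbor is a loop-free
    backup only if its reported distance (the neighbor's own metric to
    the destination) is strictly less than our feasible distance."""
    return [name for name, reported in neighbors.items()
            if reported < feasible_distance]

# Hypothetical metrics: our feasible distance to some prefix is 120.
# R3 reports exactly 120, so it fails the strict inequality and is
# excluded -- using it could create a routing loop.
neighbors = {"R2": 100, "R3": 120, "R4": 90}
print(feasible_successors(neighbors, 120))  # ['R2', 'R4']
```

The strict inequality is what guarantees loop freedom, and also what surprises people when a perfectly good-looking backup path isn’t installed.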

Last is, of course, the “proprietary protocol” thing. Yeah, yeah, you only have Cisco in your network and always will have only Cisco in your network, so this isn’t an issue. Cisco certainly wants you to see it that way. But are you sure? It makes no sense to consciously lock yourself out of future options; if start-up Murilo Network Systems comes out with a 5-pound, $100 terabit router, you might change your mind.

Far more important in the proprietary versus open protocol debate, however, is reliability and security. It’s true that many vendors add their own proprietary tweaks to their OSPF implementations, making them somewhat less open. But all in all you’ve got the eyes of a host of vendors and the entire IETF community on OSPF, with everyone understanding its inner workings and contributing to its improvement. With EIGRP you’re dependent on a single vendor to get it right.

Cisco has some of the best protocol coders in the world, and I’d trust their work over many lesser vendors. But given the choice, I’d rather not have to trust anyone more than necessary. 


Copyright © 2007 IDG Communications, Inc.