# Vendor Math

## Rules for Counting Packets

Math is a word that should never need a qualifier, yet in our industry I have heard of Cabletron Math, Cisco Math, Vendor Math, and so on. If the qualifier is needed, it is because something is wrong. Math is an absolute.

I don't care who started it: everyone has their own egregious examples, no one is ever totally open about their specs, and everyone tries to hide weaknesses in their portfolio through creative 'specs-manship.'

My favorite case of all time was the great work done on the Catalyst 5509. If you remember, it had three 1.2Gbps busses that ran the length of all nine linecard slots. There was a module with twelve GbE interfaces: one to each of the three busses and nine facing the front panel. It took two slots, so the system could hold four of them. This system was at one time marketed as a 54Gbps switch. How? 12Gbps per distributed switching module, four of them = 48Gbps. Three 1.2Gbps busses = 3.6Gbps. Two 1GbE ports on the Supervisor III = 2Gbps. Total: 53.6Gbps; round up for fun and poof = 54Gbps switch!

Today I was reviewing another vendor's specifications for a switch with 160Gbps per slot (they call it 320Gbps "full duplex") and sixteen slots. It claims a 6.2Tbps "backplane speed." If I take 16 slots at 160Gbps per slot, I get 2.56Tbps. I can understand doubling this, since all fabric-based backplanes are full duplex, so that equates to 5.12Tbps. I am still missing 1.08Tbps. A friend of mine commented that the extra terabit was used by the smoke generators and mirror array.

In my personal world-view, I think we should stick to some terms and definitions that would make it easier for customers to figure out how well a system really performs. Take this as a strawman/draft; I am open to feedback.

1) Per-Slot Bandwidth: single-count this. If I can run 40Gbps into a slot and get 40Gbps out of a slot, this is a 40Gbps slot.
A 10Gb Ethernet interface is the same way: it's not 20Gbps just because I can use it in both directions. Make sense?

2) Switch Fabrics: OK, get creative and feel free to double this. It seems to be common to take a 12-port 10Gb switch and claim it has 240Gbps of switching capacity. This actually seems somewhat 'right,' since it can move 240Gb in one second.

3) Local Switching: don't get creative here. Most everyone's linecard does some form of local switching, but with VOQ and fabric arbitration, many vendors are moving to have the forwarding decision made locally and then still move the frames across the fabric even if the destination is on the same card.

4) Packets Per Second: this is an absolute; don't add up each stage. You should expect roughly 14.88Mpps per 10Gbps in a wire-rate forwarding engine.

5) Backplanes: we all want to talk about how well our chassis are built and how awesome the signal-to-noise ratio is on our copper traces; some companies even brag about how much copper their backplane has. I can see stating what you believe the backplane capacity should be able to grow to, because this helps customers know whether a particular system 'has legs.' Be sure to list this as 'future capacity,' though, and don't confuse people into thinking you are shipping that today if it will take new linecards and new fabrics, and the only thing maintained is a chassis with some connectors.

6) Power Draw: give two numbers. We all need to know the worst-case 55C power draw under full load; otherwise the insurance companies and UL get a little particular. But also show a 50% load factor at the ASHRAE published data center temperature guidelines. This will help with thermal and power planning.

/end soapbox

dg
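The backplane arithmetic above is easy to replay yourself. This sketch uses only the figures quoted in the article (16 slots, 160Gbps per slot, a claimed 6.2Tbps) to show how far the marketing number drifts from the single-counted and even the double-counted totals:

```python
# Sanity-check the "6.2Tbps backplane" claim using the article's figures.
# Nothing here is vendor-confirmed; it's just the arithmetic from the text.

slots = 16
per_slot_gbps = 160                 # single-counted per-slot bandwidth
claimed_gbps = 6200                 # the vendor's "6.2Tbps backplane speed"

aggregate = slots * per_slot_gbps   # honest single-counted total
full_duplex = aggregate * 2         # generous double-count for full duplex

print(f"aggregate:   {aggregate / 1000:.2f} Tbps")    # 2.56 Tbps
print(f"full duplex: {full_duplex / 1000:.2f} Tbps")  # 5.12 Tbps
print(f"unexplained: {(claimed_gbps - full_duplex) / 1000:.2f} Tbps")  # 1.08 Tbps
```

Even after granting the full-duplex doubling, 1.08Tbps of the claim has no home in the hardware.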
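The 14.88Mpps figure in rule 4 falls straight out of Ethernet framing: a minimum-size 64-byte frame carries 20 bytes of unavoidable wire overhead (8-byte preamble plus a 12-byte inter-frame gap), so 84 bytes occupy the wire per packet. A small sketch of the calculation:

```python
# Wire-rate packets per second for Ethernet at a given link speed.
# 64-byte minimum frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes.

def wire_rate_pps(link_bps: float, frame_bytes: int = 64) -> float:
    overhead_bytes = 8 + 12                      # preamble + inter-frame gap
    bits_on_wire = (frame_bytes + overhead_bytes) * 8
    return link_bps / bits_on_wire

print(f"{wire_rate_pps(10e9) / 1e6:.2f} Mpps")   # ~14.88 Mpps per 10GbE
print(f"{wire_rate_pps(1e9) / 1e6:.2f} Mpps")    # ~1.49 Mpps per GbE
```

A forwarding engine that can't sustain this rate at minimum frame size is not wire-rate, no matter what the aggregate-bandwidth number on the datasheet says.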
