Multithreading weaves its way into nets

By Mario Nemirovsky

Network systems increasingly need to be application-aware to control access, allocate resources and prioritize traffic. Maintaining stateful packet-flow information at gigabit-per-second line speeds requires a rate of random memory access beyond the capability of today's traditional processors. And ASICs, while fast, can't keep pace with constant changes in network protocols and applications.

A new architectural approach for application-aware networks has demonstrated tangible benefits: massive multithreading (MMT). Understanding this technology is key to evaluating the next wave of network infrastructure.

In the current generation of MMT processors, software threads typically correspond one-to-one to hardware threads, or streams. Threads are often organized into clusters, or tribes, to optimize resource utilization, and multiple tribes can be implemented on the same chip. Each tribe has access to its own local external dynamic RAM (DRAM), as well as to a shared internal memory. The term pipeline (or core) refers to the physical circuitry that executes software instructions.

Networking differs fundamentally from desktop computing because processing stateful packet flows requires frequent access to data with low locality. Locality is the likelihood that the data or instructions a processor needs are already in cache or nearby memory. Because packets in a stateful flow arrive at random intervals, network equipment benefits little from PC-oriented multiprocessors that depend on a high degree of locality for performance. Low locality produces a high rate of cache misses, driving latency beyond acceptable limits. MMT sidesteps this problem by keeping many hardware threads in flight: while one thread stalls on a memory access, the pipeline executes instructions from others, hiding latency behind useful work.

Nemirovsky is chief scientist for ConSentry Networks. He can be reached at email@example.com.
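The thread-per-flow idea the article describes can be sketched in software. The following is a minimal Python illustration, not the MMT hardware model: the flow keys, packet counts, and the sleep standing in for a slow DRAM lookup are all invented for the example. It shows many threads each maintaining stateful per-flow information while their memory waits overlap.

```python
# Sketch: one software thread per packet flow, each updating shared
# per-flow state. The sleep models a high-latency memory access; with
# many threads, those waits overlap instead of serializing.
import threading
import time
from collections import defaultdict

flow_state = defaultdict(int)      # per-flow packet counters (shared state)
state_lock = threading.Lock()      # protects the shared state table

def handle_packet(flow_key):
    time.sleep(0.01)               # stand-in for a slow external DRAM lookup
    with state_lock:
        flow_state[flow_key] += 1  # update stateful flow information

def process_flow(flow_key, n_packets):
    for _ in range(n_packets):
        handle_packet(flow_key)

# Eight flows, five packets each, processed by eight concurrent threads.
threads = [threading.Thread(target=process_flow, args=(f"flow-{i}", 5))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In an MMT processor the switching happens in hardware among streams rather than in an operating-system scheduler, but the principle is the same: plentiful threads turn memory stalls into an opportunity to do other work.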