As Comcast’s reversal this week shows, traffic-management practices that target individual protocols are increasingly going out of fashion.
A month after Comcast aggressively defended its targeted use of TCP reset packets to delay or stop BitTorrent uploads at an FCC hearing, the company has reversed course and says it will stop targeting individual peer-to-peer (P2P) protocols when managing network traffic.
Comcast’s reversal came as a welcome development for BitTorrent, which had argued against the ISP’s techniques at last month’s FCC hearing on broadband network management practices. Ashwin Navin, BitTorrent’s president and co-founder, says his company has been negotiating with Comcast for more than two years on traffic-management issues, and that the recent media attention to Comcast’s traffic-management practices has served as “a catalyst” to announce the two companies’ collaboration. In return for Comcast’s cooperation, he says that BitTorrent will work with other companies to develop more-efficient P2P technology that will place less of a burden on network architecture.
“We are particularly enthusiastic about Comcast’s commitment to make their network management protocol agnostic -- neutral to all applications -- as well as their efforts to upgrade broadband speeds for both downstream and upstream traffic,” Navin says. “We will optimize our application to take advantage of their network upgrades and share those techniques with the broader Internet community.”
Marty Lafferty, president of the Distributed Computing Industry Association, says this newly announced collaboration between Comcast and BitTorrent was inevitable given the ever-increasing consumer and business demand for bandwidth and the potential of P2P protocols to deliver large files rapidly over the Web.
“I think we’re at the point right now where the advantages of P2P technology are so enormous that it has to go forward,” says Lafferty, whose organization sponsors the P4P working group, which works with ISPs and P2P companies to optimize P2P content delivery. “P4P is not an individual technology, but rather a set of practices that will enable ISPs to ensure the most-efficient possible delivery of payloads for their customers.”
Typically, P2P technology such as BitTorrent distributes a large file by breaking it into small pieces that are downloaded from multiple sources; once all the pieces have been received, the file is reassembled. While this method of file sharing is much faster and more efficient than relying on a single centralized server, it can cause traffic-management problems for ISPs, because P2P protocols are designed to grab chunks of data from wherever they can be found, without particular regard for network efficiency.
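In rough outline, the mechanics look something like the sketch below. It is a simplified illustration rather than the actual BitTorrent wire protocol, with a piece size and hash choice picked only for demonstration.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # illustrative piece size, not BitTorrent's actual default


def split_into_pieces(data: bytes, piece_size: int = PIECE_SIZE):
    """Split a file into pieces and record a hash for each, so pieces
    fetched from different peers can be verified independently."""
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    hashes = [hashlib.sha1(p).hexdigest() for p in pieces]
    return pieces, hashes


def reassemble(received: dict, hashes: list) -> bytes:
    """Reassemble the file once every piece has arrived, rejecting any
    piece whose hash does not match the expected value."""
    out = []
    for index, expected in enumerate(hashes):
        piece = received[index]  # pieces may have arrived in any order
        if hashlib.sha1(piece).hexdigest() != expected:
            raise ValueError(f"piece {index} failed verification")
        out.append(piece)
    return b"".join(out)
```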
This has led some ISPs to use controversial methods to slow or stop P2P traffic on their networks. Last year, for instance, the Associated Press reported that Comcast had been employing technology that is activated when a user attempts to share a complete file with another user through a P2P application such as BitTorrent or Gnutella. As the user uploads the file, Comcast sends TCP RST packets to both the uploader and the downloader, telling them that there has been an error on the network and that a new connection must be established. Because those packets do not appear to come from Comcast, critics have accused the company of sending forged or spoofed packets that they say are deceptive to consumers.
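The telltale sign of this kind of interference is a reset that neither endpoint actually sent. The sketch below is a hypothetical detector, not any tool actually used by Comcast’s critics: it watches a port commonly associated with BitTorrent and flags RST segments whose IP TTL differs sharply from the rest of the flow, a common hint that the packet was injected by a middlebox rather than sent by the remote peer.

```python
from collections import defaultdict

from scapy.all import IP, TCP, sniff  # requires scapy and packet-capture privileges

# Track the TTLs seen on each flow so an out-of-character RST stands out.
ttl_seen = defaultdict(list)


def inspect(pkt):
    if IP not in pkt or TCP not in pkt:
        return
    flow = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
    if pkt[TCP].flags & 0x04:  # RST bit set
        history = ttl_seen.get(flow)
        # A TTL far from what this flow normally shows suggests injection.
        if history and abs(pkt[IP].ttl - history[-1]) > 5:
            print(f"suspicious RST on {flow}: TTL {pkt[IP].ttl} vs {history[-1]}")
    else:
        ttl_seen[flow].append(pkt[IP].ttl)


# 6881 is one port commonly associated with BitTorrent traffic.
sniff(filter="tcp port 6881", prn=inspect, store=False)
```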
But as Comcast’s reversal this week shows, such traffic-management practices are increasingly going out of fashion, and many ISPs already have a policy of not targeting individual protocols when they manage traffic. Jeffrey Sopha, manager of network development for wireless technology for Sprint Nextel, says that Sprint doesn’t believe it is in a position to “police the Internet” and that his company works to add capacity during peak times rather than slow targeted applications.
“We recognize that wireless is following the trends that wireline has followed for peer-to-peer traffic,” he says. “So we take measurements in real time at various points throughout the network and we determine when it’s the appropriate time to add more radio capacity, firewall capacity and so forth.”
At last month’s FCC hearing, Tom Tauke, Verizon’s executive vice president for public affairs, policy and communications, also said that his company did not use TCP RST packets to manage P2P traffic “because of the capacity of the network that we’re currently deploying.”
However, he also noted that the rapid growth of bandwidth-intensive applications made it very difficult for ISPs to determine just how much to invest in building out capacity. To mitigate the problem, many ISPs and P2P vendors have started looking at ways to make P2P architecture more sensitive to network needs. Some of the highest-profile ideas have come from the DCIA’s P4P working group, which already includes major players such as Verizon, AT&T, Comcast, BitTorrent and Pando.
Last week, the group announced that it had successfully tested experimental P2P software developed by researchers at Yale University that the group says could eliminate many of the headaches that P2P systems have traditionally caused ISPs. Rather than taking data from wherever it’s available, the new system actively directs file sharing among multiple users and puts far less strain on network capacity. Haiyong Xie, a researcher who helped develop the software while a Ph.D. student at Yale, says that the protocol uses an ISP’s topology map to suggest which clouds of P2P clients should peer with one another.
“Suppose we have clients trying to peer with one another in three different cities – New York, Boston and Washington, D.C.,” he says. “After analyzing the data provided by the network topology map, the iTracker may tell the appTracker that . . . it would be optimal for the New York users to peer with the D.C. users 90% of the time, and with the Boston users 10% of the time.”
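That 90/10 split can be read as a weighted peer-selection policy. In the sketch below, the weight table, the city lists and the pick_peer helper are illustrative names, not the P4P group’s actual interfaces; it simply shows how an application-level tracker might bias which city’s peers a New York client connects to.

```python
import random

# Hypothetical iTracker guidance for New York clients: peer with D.C.
# clients 90% of the time and Boston clients 10% of the time.
itracker_weights = {"washington_dc": 0.9, "boston": 0.1}

# Hypothetical candidate peers known to the appTracker, grouped by city.
peers_by_city = {
    "washington_dc": ["dc-peer-1", "dc-peer-2"],
    "boston": ["bos-peer-1"],
}


def pick_peer():
    """Choose a city according to the iTracker's weights, then a random
    peer within that city."""
    cities = list(itracker_weights)
    weights = [itracker_weights[c] for c in cities]
    city = random.choices(cities, weights=weights, k=1)[0]
    return random.choice(peers_by_city[city])


# Over many selections, roughly 90% of connections go to D.C. peers.
print([pick_peer() for _ in range(10)])
```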
And this is only one of the projects that P4P members have been working on. Content delivery network vendor Velocix, for example, recently unveiled a hybrid P2P protocol for live video streaming that relies both on traditional peers and on content cached on Velocix’s CDN. While the system relies primarily on cache servers to deliver the stream, it can also accelerate delivery by adding peer-to-peer sharing. Broadband media delivery company PeerApp, meanwhile, has developed a protocol for caching P2P content at the edge of the network, which the company says lets ISPs manage their traffic more easily by effectively adding bandwidth during peak hours: P2P users can download files from cache servers at peak times rather than relying exclusively on peers.
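Both approaches come down to the same source-selection idea: prefer an ISP-local cache when it has the content, especially at peak hours, and fall back to ordinary peers otherwise. The sketch below is a minimal illustration under that assumption, with a made-up CacheServer class and peak window standing in for Velocix’s and PeerApp’s actual systems.

```python
from datetime import datetime

PEAK_HOURS = range(18, 23)  # assumed evening peak window, purely illustrative


class CacheServer:
    """Stand-in for an ISP-edge cache node that knows which pieces it holds."""

    def __init__(self, name, pieces):
        self.name = name
        self.pieces = set(pieces)

    def has_piece(self, piece_id):
        return piece_id in self.pieces


def choose_sources(piece_id, cache_servers, peers, now=None):
    """Pick sources for one piece: caches first at peak hours to relieve
    upstream links, peers first off-peak so the swarm still carries load."""
    now = now or datetime.now()
    cached = [c.name for c in cache_servers if c.has_piece(piece_id)]
    if now.hour in PEAK_HOURS:
        return cached + peers
    return peers + cached


# Example: at 8 p.m. the cached copy is tried before any peer.
edge = CacheServer("isp-edge-cache", pieces={1, 2, 3})
print(choose_sources(2, [edge], ["peer-a", "peer-b"],
                     now=datetime(2008, 3, 28, 20, 0)))
```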
Gartner analyst Mike McGuire says that such innovation will be a terrific asset for the growth of P2P technology, as it will make more ISPs willing to tolerate high volumes of file sharing on their networks. He also says that as P2P technologies continue to develop, more ISPs will look at them as important content-distribution tools rather than threats to their networks.
“Five years ago, it was very easy for ISPs to say, ‘We have to crack down on piracy’ and leave it at that,” he says of ISPs’ past attitudes toward P2P. “At the time, we were telling them to not assume that all P2P architectures are the enemies of their businesses. . . . These technologies aren’t going away and you can’t sue them out of existence.”
Irwin Lazar, an analyst at Nemertes Research, notes that if P2P protocols can overcome their reputation as tools for piracy and make inroads as legitimate content delivery applications, then more ISPs will be willing to invest in optimizing them for their networks.
"The music industry fought illegal file sharing for so long, but they eventually let Apple roll out iTunes, which showed that people would pay for music online if it was offered at a reasonable price and was relatively easy to download," he says. "If P2P systems can do the same thing for, say, HD video, then ISPs are going to have to use some kind of P2P storage system."