Other ideas, according to Russell, include things like measuring the carbon footprint of a given stretch of road (by monitoring fuel consumption) and improved navigation and traffic avoidance.
BU professor Crovella, however, says that this hyper-connectivity is problematic at four central pain points. The first is the Internet Protocol itself. The standard that makes a global Internet possible, as originally conceived, is essentially out of usable IP addresses, thanks to rapid growth.
"It was never conceived of that we would have multiple Internet protocol addresses for every single human being on the planet," he said. "And you can trace some of the decisions that have been made along the way as being somewhat suboptimal."
For instance, according to Crovella, MIT itself was given 16 million IP addresses. "They don't need 16 million addresses to run the university," he said. Fundamentally, however, the problem is a simple shortage of possible addresses under the IPv4 standard. The newer IPv6 standard ups the number of possible addresses from a little less than 4.3 billion to 3.4 x 10^38 - more than enough to meet even the wildest growth scenarios - but it's not backwards compatible with the earlier system, making the transition a headache.
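For a sense of scale, the numbers above fall out of simple powers of two. A back-of-the-envelope sketch in Python (treating MIT's 16 million addresses as a legacy "/8" class-A allocation, the standard grant of that era):

```python
# Back-of-the-envelope arithmetic behind the address-shortage figures.

ipv4_addresses = 2 ** 32    # 32-bit addresses: 4,294,967,296 total
ipv6_addresses = 2 ** 128   # 128-bit addresses: ~3.4 x 10^38 total
mit_allocation = 2 ** 24    # a legacy /8 block: 16,777,216 addresses

print(f"IPv4 total:   {ipv4_addresses:,}")   # a little under 4.3 billion
print(f"IPv6 total:   {ipv6_addresses:.1e}")
print(f"One /8 block: {mit_allocation:,}")   # the ~16 million Crovella cites
```

With roughly 4.3 billion IPv4 addresses for a world population of several billion, there isn't even one address per person, let alone "multiple Internet protocol addresses for every single human being."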
The second problem, Crovella said, is the Transmission Control Protocol, or TCP. This system, designed to address network congestion problems and improve reliability, has a seemingly minor issue that nonetheless complicates its use with wireless connections, which are increasingly prevalent.
TCP monitors connections for packet loss. When it detects a loss, it assumes the network is congested and throttles traffic accordingly.
"The problem is, as we've seen, we're moving to a world in which most data is sourced or synched on a wireless network," said Crovella. "And wireless networks have different properties, and they lose packets for different reasons. A wireless network can lose a packet for reasons that have nothing to do with congestion."
What this means is that a wireless packet loss caused by, in Crovella's example, a microwave oven turning on can prompt TCP to assume the network is congested and throttle traffic accordingly.
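That throttling behavior can be sketched with the classic additive-increase / multiplicative-decrease rule used by loss-based TCP variants. This is a toy model, not a real TCP stack, and the window sizes and loss pattern are illustrative:

```python
# Toy sketch of TCP's additive-increase / multiplicative-decrease (AIMD)
# congestion control. TCP cannot tell WHY a packet was lost, so a loss
# from wireless interference is treated exactly like congestion.

def aimd_step(cwnd, packet_lost):
    """One round-trip of AIMD: halve the congestion window on any loss,
    otherwise grow it by one segment."""
    if packet_lost:
        return max(1, cwnd // 2)  # multiplicative decrease on loss
    return cwnd + 1               # additive increase otherwise

cwnd = 10
for lost in [False, False, True, False]:  # one spurious wireless loss
    cwnd = aimd_step(cwnd, lost)
print(cwnd)  # window grows to 12, is halved to 6, then creeps back: 7
```

The single spurious loss cuts the sender's rate in half even though the network had capacity to spare, which is exactly the mismatch Crovella describes.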
The third problem is a lack of security at the highest levels of the global Internet. The border gateway protocol that governs traffic between big ISPs has no built-in security, the professor said, a fact that has been exploited in several high-profile incidents, including the Pakistan/YouTube outage in 2008.
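The mechanism behind that incident can be sketched with longest-prefix-match route selection: a router believes whichever announcement is most specific, and classic BGP performs no authentication. The prefixes below follow public accounts of the 2008 event; the lookup code is an illustrative toy, not router software:

```python
# Toy longest-prefix-match lookup illustrating the 2008 Pakistan/YouTube
# incident: a rogue, more-specific /24 announcement wins over the
# legitimate /22, and nothing in the protocol checks who announced it.
import ipaddress

routes = {
    ipaddress.ip_network("208.65.152.0/22"): "YouTube (legitimate)",
    ipaddress.ip_network("208.65.153.0/24"): "Hijacker (more specific)",
}

def best_route(dst):
    """Pick the matching route with the longest prefix, as routers do."""
    matches = [net for net in routes if dst in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

# Any destination inside the rogue /24 is diverted to the hijacker:
print(best_route(ipaddress.ip_address("208.65.153.1")))
```

Proposals such as cryptographically signed route origins exist precisely to add the missing authentication step, but the base protocol simply trusts what it hears.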
Finally, according to Crovella, there's a shortage of wireless spectrum available for large-scale network projects, which means that existing frequencies may have to be repurposed and new auctions held.
"For example, the white space between television channels is probably going to be used for home networking, and we're going to try and dislodge the frequencies that have been used in the past, but aren't being used anymore," he said.