The Internet as we have all known it mirrors the design of old mainframes with dumb terminals: The data path is almost entirely geared toward data coming down the network from a central location. It doesn't matter if it's your iPhone or a green-text terminal, the fast pipe has always been down, with relatively little data sent up.

The arrival of IoT threatens to turn that on its head. IoT will mean a massive flood of endpoint devices that are not consumers of data but producers of it, data that must be processed and acted upon. That means sending lots of data back up a narrow pipe to data centers.

For example, an autonomous car may generate 4TB of data per day, mostly from its sensors, but 96% of that data is what is called true but irrelevant, according to Martin Olsen, vice president of global edge and integrated solutions at Vertiv, a data center and cloud computing solutions provider. "It's that last 4% of what's not true that is the relevant piece. That's the data we want to take somewhere else," he said.

So does this mean a massive investment in rearchitecting your network for fatter pipes into the data center? Or can the advent of edge computing take the load off central data centers by doing much of the processing work at the edge of the network?

What is edge computing?

Edge computing is decentralized data processing specifically designed to handle data generated by the Internet of Things. In many cases, the compute equipment is housed in a physical container or module about the size of a cargo shipping container, and it sits at the base of a cell tower, because that's where the data is coming from.

Edge computing has mostly been used to ingest, process, store and send data to cloud systems.
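That ingest-filter-forward pattern can be sketched in a few lines. This is an illustrative toy, not any vendor's API: the reading fields, relevance test, and thresholds are all hypothetical, standing in for whatever anomaly test a real deployment would use.

```python
# Illustrative edge-filter sketch: ingest readings locally, keep only
# the "relevant" ones, and forward that small fraction upstream.
# Field names and thresholds are hypothetical, not any vendor's API.

def is_relevant(reading: dict) -> bool:
    """Flag readings that deviate from their expected baseline."""
    return abs(reading["value"] - reading["baseline"]) > reading["tolerance"]

def filter_at_edge(readings: list) -> list:
    """Discard 'true but irrelevant' data; return only what travels upstream."""
    return [r for r in readings if is_relevant(r)]

# 100 sensor readings: 96 routine, 4 anomalous -- mirroring the 96%/4%
# split described above. Only the anomalies should go up the network.
readings = [{"value": 20.0, "baseline": 20.0, "tolerance": 0.5} for _ in range(96)]
readings += [{"value": 25.0, "baseline": 20.0, "tolerance": 0.5} for _ in range(4)]

upstream = filter_at_edge(readings)
print(f"{len(upstream)} of {len(readings)} readings sent upstream")  # 4 of 100
```

The point is not the filter itself but where it runs: executed at the cell-tower container, only the anomalies ever touch the backhaul.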
It is the edge where the wheat is separated from the chaff and only relevant data is sent up the network.

If the 4% Olsen talks about can be processed at the edge of the network rather than in a central data center, it reduces bandwidth needs and allows for faster response than sending it up to the central server for processing. All of the major cloud providers – like AWS, Azure or Google Compute Engine – offer IoT services and process what is sent to them.

In many cases, the edge can perform that processing and discard the unneeded data. Since cloud providers charge by how much data they process, it is in the customer's financial interest to reduce the amount they send up for processing.

"We need much more compute out at the edge of the network. This drives profound change, but interesting in that while we'll see far more data generated out at the edge, a very limited amount of it needs to travel very far," said Olsen.

"Edge data centers tend to aggregate data and perform actuation functions to give an answer in low latency," said Jim Poole, vice president of business development for Equinix. "What most companies are still doing is aggregating metadata from all these edge locations at a central location to do machine learning and analytics."

Prashanth Shenoy, Cisco's vice president of marketing for enterprise networking and IoT, agrees that more computing should be pushed out to the edge.

"Compute has gotten cheaper and faster than the network, which suggests that compute should now be at the edge," he said.
"Also, in cases where bandwidth is at a premium or users are in remote locations, like offshore or a mine, and you don't have connectivity, you need compute and analytics at the edge."

Artificial intelligence in edge networks

Another important element in reducing the data load will be the use of artificial intelligence in edge networks, said Jeff Loucks, executive director of the Center for Tech, Media and Telecom at Deloitte.

"The use of AI in edge networks will reduce the data needed in data centers. When you think about all the data collected by an autonomous vehicle, even if you make the pipe bigger, that still leaves a lot of data to be processed. So adding AI will be key to that," he said. "We're already seeing machine-learning algorithms in low-cost devices, like a security camera that can tell the difference between a cat and an intruder. We don't need high-cost devices, just the algorithms on lower-cost and more ubiquitous devices."

5G wireless can help edge networks

Another element in making the IoT flood manageable will be the advent of 5G wireless technology. Wi-Fi is useful in some scenarios, such as industrial IoT, where the gear is in a closed, relatively confined space such as a factory floor, and Wi-Fi access points can handle the traffic. But for many scenarios, Wi-Fi just doesn't provide enough range or throughput, although it could in the future with new high-speed protocols like 802.11ax.

For outdoor IoT, like autonomous vehicles or remote sites such as industrial work sites or offshore oil rigs, the cellular network is the network of choice for its range and bandwidth. That's why edge computing containers are placed at the site of a cellular tower.

5G, currently in trials in the U.S. with expected rollout beginning next year, was designed with business use in mind, as opposed to the more consumer focus of 3G and 4G.
5G is 20 times faster than 4G, with a peak download speed of 20Gbit/sec vs. 1Gbit/sec for 4G.

"5G will be very helpful in increasing the amount of data that can be sent," said Loucks. "The pipe can be bigger, so it will increase the amount of data that can flow both ways. 5G also helps because it reduces latency, which will help industrial apps that require a lot of precision because they are so low latency. Where there have been latency problems, 5G will help correct that."

"5G is absolutely key to making this architecture work," said Olsen. "Today a very small part of Internet traffic goes over wireless networks because of bandwidth and latency. We are all far more mobile and would like to have more capacity. 4G is ill-equipped to handle all this traffic and solve for speeds."

But Equinix's Poole isn't fully sold on 5G as a solution. "The industry hasn't shown the need for ultra-low-latency apps. Very few use cases need latency below 5 milliseconds. Never say never, but there is nothing viable in the market that needs that kind of latency," he said.

Paying for edge networks

There are several challenges to moving compute to the edge, starting with the cost of edge-computing infrastructure. The edge-network containers that hold all the compute equipment aren't cheap, so the question is, who will pay for them?

"Right now the business model is not clear," said Olsen. "They have the eyeballs, but it's not clear how they make money off it. Maybe Uber or insurance companies can fund it to see how you are driving. But the biggest challenge is how do they monetize that."

Olsen also thinks the data center will have to grow just to store all the data coming in, even if it's a sliver of what is generated. "A lot of people say the edge is the end of cloud data centers, but I would be pretty hard pressed to say that.
There would be no reason to believe there wouldn't be a need for enterprise data centers," he said.

"There will be a net need for more. Even at a single-digit percentage of what is generated out at the network, when you get to things like security and privacy, all this [extra data] has to be stored somewhere. For long-term storage you go back and look at this data to do analysis," he added.

Poole said some early adopters of edge computing are repurposing their data centers for long-term computational use. "The IT deployment model has been turned on its head. Now the edge is everywhere, and the corporate data center is repurposed for long-cycle analytics. Financial services firms have moved their daily trading work to Equinix and use their own data centers for long-cycle analytics, which they still have to do," he said.
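The figures quoted throughout the piece can be combined into a back-of-envelope calculation: how long would one car's 4TB daily output take to upload at 4G's 1Gbit/sec peak versus 5G's 20Gbit/sec, and how much does filtering down to the relevant 4% at the edge change the picture? (These are theoretical peak rates; real-world throughput is far lower, so treat the results as best-case figures.)

```python
# Back-of-envelope upload times using the peak rates quoted above.
# Peak rates are theoretical best cases, not sustained real-world speeds.

DAILY_DATA_TB = 4            # ~4TB generated per autonomous car per day
RELEVANT_FRACTION = 0.04     # only ~4% is relevant after edge filtering
BITS_PER_TB = 8 * 10**12     # 1 TB = 8e12 bits (decimal terabytes)

def upload_hours(terabytes: float, gbit_per_sec: float) -> float:
    """Hours to move `terabytes` at a sustained rate of `gbit_per_sec`."""
    seconds = terabytes * BITS_PER_TB / (gbit_per_sec * 10**9)
    return seconds / 3600

full_4g = upload_hours(DAILY_DATA_TB, 1)                       # all data, 4G peak
full_5g = upload_hours(DAILY_DATA_TB, 20)                      # all data, 5G peak
filtered_4g = upload_hours(DAILY_DATA_TB * RELEVANT_FRACTION, 1)

print(f"4TB over 4G peak: {full_4g:.1f} h")      # ~8.9 h
print(f"4TB over 5G peak: {full_5g:.2f} h")      # ~0.44 h
print(f"4% over 4G peak:  {filtered_4g:.2f} h")  # ~0.36 h
```

Filtering at the edge buys roughly as much relief as the jump from 4G to 5G, which is the article's core argument: the two work together, and neither alone removes the need for central storage of what does come up the pipe.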