An academic researcher's talk on Monday at the Fog World Congress in San Francisco demonstrated both the limits of distributed computing architectures and their critical importance to future IoT and augmented reality (AR) implementations.

Dr. Maria Gorlatova's recent work has centered on fog and edge architecture – specifically, on how particular ways of architecting those systems affect latency and response time. She studies systems that run on- and off-campus and have different points of execution, which seems to be the academic way of saying "where the computational work is done."

The difference between the cloud – a highly centralized architecture – and fog computing – the industry's current term of art for systems that have the abstracted nature of the cloud but do their actual work much closer to the endpoint than the cloud's faraway data centers – is immense. Both fog and its close cousin, edge computing, are useful alternatives to cloud architecture.

"Fundamentally, our new devices that are generating high-bandwidth traffic and high-volume, high-velocity data just cannot afford to transfer all of the data to a centralized hub for processing," Gorlatova said.

Some of the trade-offs, she said, are already fairly well known. Many tasks that aren't terribly demanding from a compute or network perspective are best accomplished at the edge; for more complex tasks, however, the edge's latency advantage is outweighed by the cloud's more potent computing capabilities.

"When the task is small, the response time is dominated by the communication time, and the communication time is much smaller for edge systems," she said.
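The trade-off Gorlatova describes – communication-dominated small tasks favoring the edge, compute-dominated large tasks favoring the cloud – can be sketched as a toy response-time model. All link speeds and compute rates below are illustrative assumptions, not figures from the talk:

```python
# Toy model: response time = communication time + computing time.
# Numbers are made up for illustration; the crossover point in any
# real deployment depends on the actual link and hardware.

def response_time(task_ops, comm_time_s, compute_ops_per_s):
    """Total response time for a task of `task_ops` operations."""
    return comm_time_s + task_ops / compute_ops_per_s

# Assumed profiles: the edge link is fast but the edge node is slow;
# the cloud link is slow but the data center is fast.
EDGE = {"comm_time_s": 0.005, "compute_ops_per_s": 1e9}    # 5 ms link
CLOUD = {"comm_time_s": 0.080, "compute_ops_per_s": 1e11}  # 80 ms link

for ops in (1e6, 1e10):  # a small task and a large task
    edge = response_time(ops, **EDGE)
    cloud = response_time(ops, **CLOUD)
    winner = "edge" if edge < cloud else "cloud"
    print(f"{ops:.0e} ops: edge {edge:.3f}s, cloud {cloud:.3f}s -> {winner}")
```

Under these assumed numbers the small task finishes sooner at the edge (communication time dominates) and the large task finishes sooner in the cloud (computing time dominates), which is exactly the pattern the two quotes describe.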
"Once you talk about larger tasks, however, there are more resources in the cloud, so computing time becomes more of a component in response time and the cloud connection will be faster than the edge."

"We also noted that connections to the cloud are much faster in on-campus conditions than they are in nearby residential areas, and this is well-known – connections from campuses to the cloud are optimized."

It's an important point for academic researchers, she noted. Testing systems in areas that lack a university laboratory's optimized network connections yields results that are much more applicable to the real-world challenges faced by businesses.

The complexity of these systems makes them hard to study, according to Gorlatova. Each is different enough that it can be difficult to draw generalizations about an architecture's effect on response time without enough data points.

Secure, responsive augmented reality

Some of the lessons from that research can seem self-evident, but they have wide-ranging implications. Gorlatova's example was the security problem posed by bad actors influencing augmented reality systems – for example, creating huge, obtrusive holograms that block a user's view of the real world, a potentially serious safety issue.

Augmented reality can be educational and useful to businesses, and it is set to become a mainstream technology, Gorlatova said, as soon as the headsets become smaller and more usable and the apps become slightly more sophisticated.

"This is exactly where fog would come in, as fog can very surely address all these issues," she said.

Solutions to the vision-blocking problem, which was first described a year ago, center on fixed policy recommendations that have to be implemented manually by human beings.
By applying machine learning to the problem, however, an AR system could be taught to recognize when holograms are obstructing a user's view and simply move them out of the way or make them transparent.

"Overall, this level of intelligence in AR systems is above and beyond what current AR systems are capable of," she said. "And we are actively exploring several ways to address it."

"Fog offers a natural chokepoint for reducing the resources consumed on mobile nodes in multi-user settings, as well as a natural point for making those experiences more intelligent." She and her team are currently working on a fog-based pilot deployment for secure, responsive AR on Duke's campus, and they hope to have a system in place early next year.
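The mitigation behavior described above – detect when a hologram covers too much of the user's central field of view, then demote it – might be sketched in a heavily simplified 2-D form like the following. The screen-space bounding boxes, the central view region, and the overlap threshold are all assumptions made for illustration; a real system would rely on ML-based scene understanding rather than fixed rectangles, as Gorlatova notes:

```python
# Simplified sketch of hologram-occlusion mitigation in screen space.
# A production AR system would use learned scene understanding; this
# toy version just checks rectangle overlap with a central view region.
from dataclasses import dataclass

@dataclass
class Hologram:
    x: float      # bounding box in normalized screen coordinates
    y: float
    w: float
    h: float
    alpha: float = 1.0  # 1.0 = fully opaque

def overlap_area(a, b):
    """Area of intersection of two (x, y, w, h) rectangles."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0.0) * max(dy, 0.0)

# Assumed central "safety" region of the user's view.
VIEW = (0.25, 0.25, 0.5, 0.5)

def mitigate(holo, max_overlap=0.1):
    """Make a hologram mostly transparent if it covers more than
    `max_overlap` of the central view (a real system might instead
    move it out of the way)."""
    box = (holo.x, holo.y, holo.w, holo.h)
    covered = overlap_area(box, VIEW) / (VIEW[2] * VIEW[3])
    if covered > max_overlap:
        holo.alpha = 0.2
    return holo
```

Here a large hologram sitting over the center of the view would have its opacity cut, while a small one tucked into a corner would be left alone.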