Like any large-scale computing deployment, the short answer to the question "what should my fog compute deployment look like?" is "it varies." But since that's not a particularly useful piece of information, Cisco principal engineer and systems architect Chuck Byers gave an overview on Wednesday at the 2018 Fog World Congress of the many variables, both technical and organizational, that go into the design, care and feeding of a fog computing setup.

Byers offered general tips about the architecture of fog computing systems, as well as deeper dives into the specific areas that every fog deployment will have to address, including hardware types, networking protocols, and security.

Compute options in fog settings

Computation in fog settings often involves multiple processor types, making it a heterogeneous environment. RISC and CISC CPUs, such as those made by ARM and Intel, offer strong single-thread performance and a high degree of programmability. "They're always going to have an important place in fog networks, and almost every fog node will have at least a couple of cores of that class of CPU," Byers said.

They're far from the only options, however. Field-programmable gate arrays can be helpful in use cases where custom datapaths are used to accelerate workloads, and GPUs – seen most commonly in gaming systems, but also in increasing profusion in the high-performance computing world – are great at handling tasks that need a lot of parallel processing.

"Where a good RISC or CISC CPU may have a dozen cores, a big GPU may have a thousand cores," he said.
"And if your system and algorithms are amenable to parallel processing, GPUs are a very inexpensive and very power-efficient way to get lots and lots of bang for the buck."

Finally, tensor processing units, optimized for machine learning and AI workloads, are an obvious fit for applications that rely on that type of functionality.

Storage in fog computing

There's a hierarchy of storage options for fog computing that runs from cheap but slow to fast but expensive. At the former end sits network-attached storage. A NAS offers huge storage volumes, particularly over a distributed network, but with latency measured in seconds or minutes. Rotating disks could work well for big media libraries or data archives, according to Byers, while providing substantially better response times.

Further up the hierarchy, flash storage, in the form of regular SSDs, provides much the same functionality as a spinning platter, with the well-known tradeoff of a higher price per gigabyte for much faster access times. That could work best for fast bulk storage, though Byers also noted concerns about flash cells wearing out after a large enough number of write cycles.

"After you write to a given address in the chip more than about 2,000 times, it starts getting harder to reprogram it, to the point where, eventually, you'll get write failures on that sector of the flash drive," he said. "So you have to do a thing called wear leveling across the flash array, so that you write all of the addresses in the array about the same number of times – many times, the flash drive will manage that for you."

Local flash chips – those not set up in SSD-like arrays – are a good solution for security keys, tables and log files. And at the most expensive end of the spectrum, there's main memory.
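The wear leveling Byers describes can be illustrated with a minimal sketch: a controller redirects each logical write to the least-worn physical block so that erase counts stay roughly even across the array. The class, field names, and the 2,000-cycle endurance figure here are illustrative assumptions; real flash translation layers are far more sophisticated.

```python
class WearLeveledFlash:
    """Toy model of wear leveling: spread writes evenly across blocks."""

    def __init__(self, num_blocks, endurance=2000):
        self.endurance = endurance      # writes a block tolerates before failing
        self.wear = [0] * num_blocks    # per-block write counts
        self.mapping = {}               # logical address -> (block, data)

    def write(self, logical_addr, data):
        # Pick the least-worn block instead of reusing the same physical cell.
        block = min(range(len(self.wear)), key=lambda b: self.wear[b])
        if self.wear[block] >= self.endurance:
            raise IOError("all blocks worn out")
        self.wear[block] += 1
        self.mapping[logical_addr] = (block, data)

    def spread(self):
        # Ratio of most-worn to least-worn block; near 1.0 means even wear.
        return max(self.wear) / max(1, min(self.wear))

flash = WearLeveledFlash(num_blocks=8)
for i in range(4000):                   # hammer a single logical address
    flash.write(0x00, i)
print(flash.wear)                       # → [500, 500, 500, 500, 500, 500, 500, 500]
```

Without the redirection, 4,000 writes to one address would exceed the 2,000-cycle endurance of a single block; with it, each of the eight blocks absorbs only 500 writes.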
Main memory is best suited for popular content, in-memory databases, and so on.

Network options for fog computing

No such easily digestible hierarchy exists among the profusion of networking options available to fog architects. They split into wired and wireless categories, with wireless further divided into licensed and unlicensed varieties.

Byers offered less concrete guidance here, saying "choose the ones that make sense to you." Wireless tends to be inexpensive and low-impact, and it's really the only option for a fog deployment that has to talk to mobile devices.

Licensed wireless tends to be better controlled, with less potential for interference from outside sources, but licensing and/or usage fees will apply.

According to Byers, however, wired links tend to be preferable to wireless where possible, because they're immune to interference and don't consume RF spectrum.

"We like wireline networks, especially as you get closer to the cloud, because wireline networks tend to have more bandwidth and a lot less operational expense," he noted.

Fog computing software options

The key point where software is concerned, according to Byers, is modularity. Fog nodes linked together by standards-based APIs let users replace components of their software stack without unduly disrupting the rest of the system.

"The modular software philosophy really has to be compatible with the software development process," he said.
"So if you're using open source, you might want to partition your software modules so that they're partitioned in the same way as your open source distribution of choice."

Similarly, if agile development methods are being used, choosing the right size for the "chunk" that developers break off can let them reprogram a component within a single development cycle.

Fog computing security

Some fog systems – those that aren't monitoring or controlling anything particularly critical – have "fairly modest" security requirements, according to Byers. Others, including those with actuators capable of strongly affecting the physical world, are mission-critical.

Reactors, elevators, aircraft systems and the like "are going to kill people if they're hacked," so securing them is of the utmost importance. Government regulation is also likely to affect those systems, so that's another facet for fog designers to keep an eye on.

General tips

Energy efficiency can come into play quickly in large fog computing systems, so it behooves designers to include low-power silicon, ambient cooling, and selective power-down modes wherever possible.

In a similar vein, Byers noted that functionality that doesn't actually need to be part of the fog setup should be moved to the cloud wherever possible, so that the cloud's advantages of virtualization, orchestration and scalability can be realized to their fullest.
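That last fog-versus-cloud placement rule can be sketched as a simple heuristic: keep safety-critical and latency-sensitive work in the fog, and push everything else to the cloud. The thresholds, field names, and round-trip-time figures below are hypothetical assumptions for illustration, not from Byers's talk.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float   # tightest response deadline the task can tolerate
    safety_critical: bool   # e.g., actuators that affect the physical world

def place(w: Workload, cloud_rtt_ms=80.0):
    # Safety-critical control loops stay in the fog regardless of latency.
    if w.safety_critical:
        return "fog"
    # Otherwise, use the cloud whenever its round trip meets the deadline,
    # to benefit from virtualization, orchestration and scalability.
    return "cloud" if w.max_latency_ms >= cloud_rtt_ms else "fog"

tasks = [
    Workload("elevator-control", 10.0, True),
    Workload("video-archive-index", 5000.0, False),
    Workload("local-anomaly-alert", 50.0, False),
]
for t in tasks:
    print(t.name, "->", place(t))
# elevator-control -> fog
# video-archive-index -> cloud
# local-anomaly-alert -> fog
```

The design choice mirrors the article's reasoning: criticality trumps cost, and only work that genuinely needs fog-level latency should pay fog-level hardware and energy costs.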