Tiny, intelligent microelectronics should perform as much sensor processing as possible on-chip rather than waste resources sending often unneeded, duplicated raw data to the cloud or to computers. So say the scientists behind new machine-learning networks that aim to embed everything needed for artificial intelligence (AI) onto a processor.

“This opens the door for many new applications, starting from real-time evaluation of sensor data,” says the Fraunhofer Institute for Microelectronic Circuits and Systems on its website. With no delays from sending unnecessary data onwards, plus speedy on-chip processing, latency is theoretically zero.

Plus, on-microprocessor self-learning means the embedded, or sensor, devices can self-calibrate. They can even be “completely reconfigured to perform a totally different task afterwards,” the institute says. “An embedded system with different tasks is possible.”

Much internet of things (IoT) data sent through networks is redundant and wastes resources: a temperature reading taken every 10 minutes when the ambient temperature hasn’t changed is one example. In fact, one only needs to know when the temperature has changed, and perhaps then only when a threshold has been crossed.

Neural network-on-sensor chip

The commercial German research organization says it’s developing a specific RISC-V microprocessor with a special hardware accelerator designed for a brain-inspired artificial neural network (ANN) it has developed.
The architecture could ultimately be suitable for the condition-monitoring or predictive sensors of the kind we will likely see more of in the industrial internet of things (IIoT).

Key to Fraunhofer IMS’s Artificial Intelligence for Embedded Systems (AIfES) is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” Yet it still offers a “full AI mechanism, like independent learning.”

It’s “decentralized AI,” says Fraunhofer IMS. “It’s not focused towards big-data processing.”

Indeed, with these kinds of systems, no connection is required for the raw data at all, only for the post-analytical results, and then only if those are needed. Swarming can even replace that: swarming lets sensors talk to one another, sharing relevant information without involving a host network.

“It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says.

Another benefit of decentralized neural networks is that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.

Other edge computing research

The Fraunhofer researchers aren’t the only academics who believe entire networks become redundant with neuristor, brain-like AI chips.
Binghamton University and Georgia Tech are working together on similar edge-oriented tech.

“The idea is we want to have these chips that can do all the functioning in the chip, rather than messages back and forth with some sort of large server,” Binghamton said on its website when I wrote about the university’s work last year.

One advantage of having no major communications link: not only do you not have to worry about internet resilience, but the energy that would be spent creating the link is saved too. Energy efficiency is an ambition in the sensor world, since replacing batteries is time-consuming, expensive and, in the case of remote locations, sometimes extremely difficult.

Nor does memory or storage have to be provided for swaths of raw data awaiting transfer to a data center or similar: the data has been processed at the source, so it can be discarded.