Robots and IoT devices are similar in that both rely on sensors to understand their environment, rapidly process large streams of data, and decide how to respond.
That's where the similarities end. Most IoT applications handle well-defined tasks, whereas robots must autonomously handle unanticipated situations. Let's compare the two across six vectors:

1. Sensors

IoT – Binary output from a stationary sensor. "Is the door open or closed?"
Robots – Complex output from multiple sensors. "What is in front of me? How do I navigate around it?"

2. Processing

IoT – A simple stream of signals handled with well-known programming methods.
Robots – Large, complex data streams handled by neural network computing.

3. Mobility

IoT – Sensors are stationary and signal processing is done in the cloud.
Robots – The sensor-laden robot is mobile, and signal processing is done locally and autonomously.

4. Response

IoT – The action to take in response to a situation is well defined.
Robots – Multiple actions could be taken in response to a situation.

5. Learning

IoT – The application typically does not 'evolve' on its own and develop new features.
Robots – Machine learning and other techniques let robots 'learn' and increase their capacity to deal with new situations. For example, self-driving cars collectively get smarter as more situations are dealt with.

6. Design

IoT – Stationary sensors. Processing is done centrally, where power is readily available. Communication channels are needed between the sensors and the cloud.
Robots – Weight, size, and power demand are important design considerations. Communication capability is less important.

Topology

IoT applications are centralized, with edge devices that have little intelligence of their own.
Low-cost sensors transmit signals to a control center in the cloud, which analyzes the data stream and decides what action to take. The cost of the central hub can be amortized over thousands of sensor-based applications, making IoT applications more affordable. Network connectivity and latency limit the range of applications that IoT can meet.
Robots and drones operate in a decentralized mode. They have a high degree of decision-making capability and can function on their own even when disconnected. Robots typically share only the details of unexpected situations they encounter. This allows their algorithms to be refined by applying machine learning to the collected feedback.

Thinking is hard

Consider what happens when you pick up a pen. Your eyes scan your surroundings and your brain identifies the pen through pattern matching. Signals are sent through nerves to your arm muscles, directing them to move toward the pen. Visual signals from your eyes provide continuous feedback on your hand's position to move it precisely to the pen. Tactile feedback from your hand confirms when the pen has been picked up. A great deal of signal processing and continuous motor control for such a simple task!
Programming a robot to do the same task requires visual sensors (cameras) to provide continuous visual input, a graphics processing unit (GPU) to process the stream of visual signals, and a central processing unit (CPU) to control the motor functions.
Robots rely on multiple high-resolution sensors that generate complex data streams. Processing them requires far more computing power and multiple neural networks operating in parallel. "Neural networks are loosely modeled on the human brain, with thousands of small processing units (neurons) arranged in layers. They identify patterns based on a learning rule."

Learning as you go

IoT devices are generally designed to handle specific tasks.
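A minimal sketch of such a task — a hypothetical door sensor reporting a binary state to a central hub that holds all the decision logic. The function names are illustrative, not a real IoT SDK:

```python
# Hypothetical hub-and-sensor sketch: the sensor emits a binary reading,
# and the decision logic lives centrally in the hub.

def read_door_sensor() -> bool:
    """Stand-in for a stationary binary sensor: True means the door is open."""
    return True  # a real sensor would sample hardware here

def hub_decide(door_open: bool) -> str:
    """Central hub logic: the response to each state is fully predefined."""
    if door_open:
        return "ALERT: door is open"
    return "OK"

print(hub_decide(read_door_sensor()))  # ALERT: door is open
```

The point of the sketch is how little intelligence sits at the edge: the sensor reports a single bit, and every possible response is enumerated in advance at the hub.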
This could be as simple as a sensor detecting whether a door is open and the central hub sending an alert to notify the owner. Robots need to react to unexpected conditions that their developers may not have anticipated, such as how to navigate around an obstacle in their path.
Artificial intelligence (AI) platforms and machine learning help robots deal with such situations. They get progressively 'smarter' as more robots are deployed and share the unexpected situations they encounter.

Systems design

Hardware costs for building robots are declining even as their processing power increases. The Jetson Nano Developer Kit for building robots costs $99 and runs multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. It includes an NVIDIA CUDA-X™ AI computer that delivers 472 GFLOPS of compute performance for AI workloads on 5 watts of power. This enables a robot to work longer before requiring a recharge.
Programming a robot requires specialized software. Developers break down complex robotic tasks into a network of smaller, simpler steps using computational graphs and an entity component system, such as the Isaac Robot Engine. The robotic application is built from smaller modules (Gems) for sensing, planning, and actuation. These let robots handle obstacle detection, stereo depth estimation, and human speech recognition.

Teach your robots well

Robots, like humans, improve their motor skills with practice. They need a test bed where their instructions can be tested and debugged. Simulated test beds are better than physical ones, since it is impossible to create a physical representation of every environment where a robot might operate. Isaac Sim is a virtual robotics laboratory and a high-fidelity 3D world simulator.
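The test-and-debug cycle a simulator enables can be sketched roughly as follows — a toy one-dimensional environment with hypothetical names, not the Isaac Sim API:

```python
# Toy simulated test bed (illustrative, not Isaac Sim): a robot on a 1-D line
# must reach a goal position. The simulator lets us exercise the controller
# across many environment variations cheaply, before touching hardware.

def policy(position: int, goal: int) -> int:
    """Trivial controller under test: step one unit toward the goal."""
    return 1 if goal > position else -1

def run_episode(start: int, goal: int, max_steps: int = 100) -> bool:
    """Simulate one episode and report whether the robot reached the goal."""
    position = start
    for _ in range(max_steps):
        if position == goal:
            return True
        position += policy(position, goal)
    return position == goal

# Sweep the same controller across many simulated start/goal combinations.
results = [run_episode(s, g) for s in range(-5, 6) for g in range(-5, 6)]
print(all(results))  # True: the policy passes every simulated scenario
```

A physical test bed could only cover a handful of these scenarios; the simulated one sweeps all of them in milliseconds, which is the economic argument for tools like Isaac Sim.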
Developers train and test their robots in a detailed, realistic simulation, reducing costs and development time.
Robots improve as their decision models are revised to cover new situations they encounter. Robots operate based on the models they were programmed with, but they also send details of unexpected situations back to the cloud for review. This enables developers to refine the robot's decision-making model to deal with the new conditions. The amount of feedback increases as more robots are deployed, increasing the speed at which all the robots collectively get "smarter."
NVIDIA Nano-based robots can report new conditions they encounter to the AWS IoT Greengrass modeling platform, which lets them act locally on the data they generate while still using the cloud for management, analytics, and storage. The robots can run AWS Lambda functions, execute predictions based on machine learning models, keep device data in sync, and communicate with other devices securely – even when not connected to the internet.
IoT applications now encompass both centralized and autonomous applications. Stationary and mobile ones. Some that stick to their program and others that learn and evolve.