Once every few decades, we experience a broad shift in how people interact with computers. Think about it. How long have you been relying on your mouse to click on the things you want to interact with? In many ways, the typical user-interface model hasn't changed much since 1984, but we're finally in the midst of a major new shift.
What I'm calling the fourth-gen user interface has arrived, and it will create a truly dramatic shift for users over the next few years. These new interfaces will leverage technologies like ubiquitous connected devices, location-based services, speech recognition, computer vision, biometrics and even augmented reality (AR). This isn't your dad's computing environment.
So, why am I calling this a fourth-gen experience? Let's look at the first three generations and then dive into the fourth.
The evolution of the user interface
The first-gen computer user interface of the 1950s and 1960s required humans to feed computers data manually in batches (think punch cards), and results were returned to the user through a printer. My dad once told me a story about one of his early programming experiences: he literally dropped his computer program (a stack of punch cards) on the floor, and it took hours to re-sort it.
The second-gen user interface evolved into drastically more flexible character-based systems, or command-line interfaces (CLIs). However, they demanded that the user understand a complex system of commands and syntax to be efficient. CLIs are still highly effective for system admins and developers, but they aren't a practical solution for the majority of end users.
Today, most people interact with the third-gen user interface: the standard graphical interface that lets people navigate by clicking on on-screen objects. Whether it's a 1984-era Mac or the latest Windows 10, the experience hasn't changed extensively.
In fact, the most-used smartphone experiences still rely on this interaction model. Only recently have we started to see advanced phone applications that point the way toward a new one.
Interaction with the third-gen user interface is intuitive, but it's still quite manual. The speed at which interaction occurs depends largely on the users themselves. What we've been lacking is contextual awareness and more dynamic interaction models. The fourth-gen user interface combines these elements to achieve new levels of productivity.
The rise of IoT and the 4th-gen user interface
We've recently witnessed the birth and rapid maturation of a collection of industry trends: augmented reality (AR), machine learning (ML), the Internet of Things (IoT), voice recognition, biometrics and facial recognition, to name a few. Together, these technologies enable intelligent, context-aware systems that can automatically adjust and configure themselves to anticipate and fulfill user needs.
We are seeing the rise of IoT devices in the consumer space. Chances are you've heard of, or interacted with, Amazon Alexa, a Nest Thermostat, or an AR game like Pokémon Go. The convenience and efficiency of these devices show the way, and yet they represent only a subset of the possibilities and applications enabled by the fourth-gen user interface.
Take a smart conference room as a business example. When you walk into your meeting, the system can determine who you are, which meeting you are there to join and which resources you need to get started. The IoT-enabled workspace launches the meeting automatically, shaving off the usual five minutes it takes your group to get everything up and running (and that's not counting any potential "technical difficulties").
And that's not all: the room can also automatically adjust the meeting conditions to suit your personal preferences. It can lower the shades, turn on the TV monitors, start the video camera, record the session and email links to the recording to all attendees when the meeting concludes.
In another real-world example, here at Citrix we recently retrofitted one of our office buildings with a new open floor plan. Employees can sit anywhere and log into their applications via shared devices at each desk. So how do you find your co-workers when you need them, or find free space for yourself? A map on a kiosk by the elevator is kept up to date with streamed sensor data, so you know which spaces are available, and you can ask to locate a co-worker via voice command.
And, of course, with all the data gathered in these two examples, there is huge potential for analytics on how your company's resources are being used.
The intersection of IoT and artificial intelligence (AI) technologies will become exponentially more powerful as the technology continues to mature and integrate with new systems, vendors and applications.
This is the potential of the fourth-gen user interface: the potential that exists between our physical and virtual environments. This is the reality of the future of work. This is the reality of the technology we have today. As noted sci-fi author William Gibson has said, "The future is already here — it's just not very evenly distributed."
Infrastructure requirements for the 4th-gen user interface
To truly succeed with the coming sophistication of IoT and the fourth-gen user interface, your enterprise may need a new type of computing infrastructure. Think about all of the data and telemetry feeds that must continuously stream between different systems and sensors.
Pure cloud-based infrastructures may struggle to keep up with this mix of high data rates and demand for sub-second user response. That's why edge computing and hybrid cloud models are quickly becoming synonymous with achieving and maintaining a competitive advantage.
Businesses can't send all of this data, from an ever-increasing number of devices, to the cloud efficiently or cost-effectively. They must decide what needs to go to the cloud for deep processing and what can be manipulated and analyzed more quickly and efficiently at the edge.
Public cloud infrastructures allow for flexible consumption models and faster rollouts. For certain classes of application this is revolutionary. However, with the massive data wave coming from IoT, you are often better off doing some processing locally before forwarding a subset of the data to a remote cloud.
Intelligent edge computing enables enterprises to pick and choose the hybrid architecture that delivers the best of both worlds. That's because edge computing investments are designed to work in conjunction with the cloud; they represent local points of presence for applications that are cloud-driven.
As we continue to move to these more advanced, fourth-gen user activities, we must build an infrastructure that can intelligently, and automatically, decide where each component is most efficiently processed.
Take the Nest Thermostat as an example of a device that works across a hybrid environment. Most of its day-to-day, minute-by-minute decisions are made locally. Limited, pre-processed telemetry data is streamed to the cloud, where machine learning algorithms kick in to adjust your long-term energy usage. In addition, the public cloud offers an easily accessible control point so your phone can manage the system remotely. The need for edge computing in such a hybrid cloud environment only increases with the complexity of the devices or processes involved.
As Dr.
Tom Bradicich, HPE VP and GM of Servers and IoT Systems, explained in a recent blog post, there are several key reasons why intelligent edge systems will be critical in the enterprise, including, but not limited to, latency, bandwidth, compliance, security, cost, duplication and data corruption.
When, why and how to get started in the enterprise
So, when should enterprises operationalize IoT computing, fourth-gen user interface technologies and edge computing? The answer is now.
These fourth-gen technologies are maturing on a similar timeline, and the first to adopt will be the ones who reap the most benefit. Timing is critical if IT is to establish a sustainable competitive advantage, no matter what industry you're in.
Security and privacy concerns also take center stage in a hybrid cloud environment. The strict regulations around financial data, the private nature of personal and health-related data, and the persistence of advanced cyberattacks all point to the need for a flexible system. Businesses must decide when to process data locally and when to export it to the cloud, depending on its sensitivity. The power of edge computing and a hybrid environment makes this possible. When the stakes of a single breach are so drastic, there isn't room for undue risk.
The volume of telemetry data and the analytical power these technologies produce are revolutionizing and accelerating how people can work. The implications for efficiency, productivity, collaboration, engagement and innovation are huge.
Wondering how to get started? If you're seeking an entry point, start with the IoT hubs that most major cloud providers maintain. These hubs, such as Azure IoT Hub, are essentially edge collection points that gather local telemetry data for preprocessing before sending it to the cloud.
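To make the edge-versus-cloud division of labor concrete, here is a minimal Python sketch of an edge node in the style discussed above: time-critical decisions happen locally on every sensor reading, while only compact, aggregated telemetry is forwarded upstream. All names here (EdgeNode, the thermostat-style setpoint, send_to_cloud) are hypothetical stand-ins, not a real provider SDK; a production system would publish the summary to an IoT hub over a transport such as MQTT or HTTPS.

```python
# Illustrative edge node: act locally, upload only aggregated telemetry.
from statistics import mean

class EdgeNode:
    def __init__(self, setpoint, batch_size=5):
        self.setpoint = setpoint      # e.g., target temperature in Celsius
        self.batch_size = batch_size  # raw samples per uploaded summary
        self.buffer = []              # raw samples kept at the edge
        self.uploaded = []            # summaries "sent" to the cloud

    def actuate(self, reading):
        """Local, sub-second decision: no cloud round-trip required."""
        return "heat_on" if reading < self.setpoint else "heat_off"

    def ingest(self, reading):
        action = self.actuate(reading)
        self.buffer.append(reading)
        # Forward only an aggregate, not every raw sample.
        if len(self.buffer) >= self.batch_size:
            summary = {
                "min": min(self.buffer),
                "max": max(self.buffer),
                "mean": round(mean(self.buffer), 2),
                "n": len(self.buffer),
            }
            self.send_to_cloud(summary)
            self.buffer.clear()
        return action

    def send_to_cloud(self, summary):
        # Stand-in for an IoT-hub upload (MQTT/HTTPS in practice).
        self.uploaded.append(summary)

node = EdgeNode(setpoint=21.0)
actions = [node.ingest(t) for t in [19.5, 20.0, 22.1, 21.4, 20.9]]
```

The design point is that `actuate` never waits on the network, so responsiveness survives a cloud outage, while the cloud still receives enough aggregated data for the kind of long-term analytics described in the Nest example above.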
Start your edge computing journey by exploring the IoT-related services that most cloud providers offer.
The fourth-gen user interface is already here; it's just not evenly distributed. Those who lag behind the industry leaders will soon be scrambling to catch up.