Of the 47 enterprises I chatted with in December, guess how many were NOT users of hybrid cloud. Zero.
Guess how many ever used another cloud model. Zero.
Guess how many believe they will "move everything to the cloud". Zero.
OK, I realize that you may not have read this sort of thing very often or at all, but I think it demonstrates just how important hybrid cloud is and how little we really know about it. That's bad, because it's never good when something critical is so poorly understood, but it could be a good thing for network professionals looking to re-engage with their company's IT planning process.
Our thesis on network professional engagement in application planning is simple. Developers understand functionality and hosting requirements. IT operations people understand hosting and cost management. What network professionals understand is the workflows that bind all this into an experience. By focusing planning discussions on workflows, a network professional creates profound value for the enterprise, and nowhere is that more evident than in the hybrid cloud.
If you draw out the structure of a hybrid-cloud application, putting the user on the left, you'd first draw a bidirectional arrow to a circle labeled "Cloud", then another from that circle to a circle on the right labeled "Data Center". That's the general layout of a hybrid-cloud application. The user (who might be a worker, a partner, or a customer/prospect) interacts with the cloud via a well-designed GUI. The cloud portion of the application turns this interaction into a transaction, which goes to the data center. Something there (an application, a database) generates a result, which is then returned via the cloud GUI to the user.
Don't get hung up on network-think here; remember that the goal is to think about the workflows the interactions create.
Application design and componentization are slaves to workflows and interactions. The first point you want to make in a design meeting is that the best, most cost-effective designs will be the ones that limit back-and-forth interactions, whether from user to cloud or from cloud to data center. Those are the two points in the diagram that need to be addressed first.
Step 1: Minimize user-to-cloud traffic.
User-to-cloud interactions can multiply costs, complicate network connections, and eat into quality of experience (QoE). Keeping the number of interactions as low as possible without compromising QoE is a starting point, but the real challenge is maximizing cloud benefits without risking massive cost overruns.
The value of the cloud lies in its ability to scale under load and to replace failed components quickly, which is often an attribute of scalability. Scalability usually matters most for the application components that connect with the user and process those workflows. If you want scalability, you probably need some form of load balancing to divide the work, but you also need to think about the problem of state.
State is developer-speak for "where you are in a multi-step dialog". Users will almost always have a multi-step interaction with the cloud, so it's critical to know what step you're in to process a message correctly. Many developers automatically think of handling each dialog step in a separate, small component (a microservice) dedicated to that step. That will multiply the number of components, and with them the costs and the complexity of the cloud network. The alternative is state control, where either the user's app or a cloud database maintains the dialog state. That means a single microservice can handle multiple steps, perhaps the whole dialog, and multiple users can share instances of each microservice.
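To make the state-control idea concrete, here's a minimal sketch in Python. It's purely illustrative: the dialog steps, message fields, and the in-memory dict standing in for a cloud database are all assumptions, not anything from a specific product. The point it demonstrates is that when state lives outside the service, one stateless handler can process every step of the dialog, so any instance behind a load balancer can field any message.

```python
# Illustrative "state control" sketch: dialog state lives in an external
# store, so a single stateless microservice handles every dialog step.
# The dict below stands in for a cloud key-value database; the step names
# and message fields are hypothetical.

state_store = {}  # stand-in for a shared cloud database

def handle_message(session_id: str, message: dict) -> dict:
    """One stateless handler for the whole multi-step dialog."""
    # Load this user's dialog state; start fresh if none exists yet.
    state = state_store.get(session_id, {"step": "start", "cart": []})

    if state["step"] == "start":
        state["step"] = "browsing"
        reply = {"prompt": "What are you looking for?"}
    elif state["step"] == "browsing":
        state["cart"].append(message["item"])
        state["step"] = "confirming"
        reply = {"prompt": f"Add {message['item']} and check out?"}
    else:  # confirming
        state["step"] = "done"
        reply = {"result": "order placed", "items": state["cart"]}

    state_store[session_id] = state  # persist state before replying
    return reply
```

Because `handle_message` keeps nothing between calls, replicas of this one microservice can be shared across users and scaled up or down freely, rather than dedicating a separate component to each dialog step.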
All of that will reduce the number of microservices and connections, which will reduce cost and latency.
The best way to start a discussion on this issue is to ask developers to map out how the workflows connect within the cloud. This process will quickly uncover issues with the number of microservices and connections, and open the question of how the application design could be optimized to address both problems. Often developers will see the problem quickly and understand how to fix it. And they'll remember who pointed it out in the first place!
Step 2: Optimize use of MPLS and SD-WAN.
Now look at the data center side. There are a lot of options for the relationship between cloud and data center, and most of them are bad. For example, having a cloud component "read" a database hosted in the data center will create a lot of traffic that you'll pay for, and a lot of per-access delay that will blow your QoE out of the water. Instead, you want to send the data center a single message that calls for all the database work and processing needed, and have it send back only the result.
Most hybrid applications use the data center first for an "inquiry" to find a specific record or set of records (like products that match the user's interest), and then for an "update" that changes the status of one of those records (like a purchase). A great use for a cloud database is to hold the inquiry results as the user browses through the options, which eliminates the need to keep going back to the data center for another record. Each of those return trips would incur traffic charges from the cloud provider, load the network connection to the cloud, and add to the accumulated delay. When an update is made, the change is sent to the data center.
One question emerging from the data-center workflows is the role of the company VPN. Enterprises all rely on MPLS VPNs, sometimes augmented by or even replaced by SD-WAN VPNs.
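Before getting into the VPN question, the inquiry/update pattern above can be sketched in a few lines of Python. Everything here is a stand-in: the product records, the cache dict playing the role of a cloud database, and the functions representing the data-center round trips are all hypothetical. What the sketch shows is the shape of the workflow: one batched inquiry to the data center, browsing served from the cloud-side cache, and only the final update sent back.

```python
# Illustrative inquiry/update sketch: one batched trip to the data center,
# browsing served from a cloud-side cache, one update sent back.
# All names and records are hypothetical stand-ins.

inquiry_cache = {}  # stand-in for a cloud database holding inquiry results

def data_center_inquiry(query: str) -> list:
    """Stand-in for the single message that does all the database work."""
    return [{"sku": f"{query}-{i}", "price": 10 + i} for i in range(3)]

def browse(session_id: str, query: str, index: int) -> dict:
    """Serve each browsing step from the cache, not the data center."""
    if session_id not in inquiry_cache:
        # The only data-center round trip during browsing.
        inquiry_cache[session_id] = data_center_inquiry(query)
    return inquiry_cache[session_id][index]

def purchase(session_id: str, index: int) -> dict:
    """Only the update (the purchase) goes back to the data center."""
    record = inquiry_cache[session_id][index]
    return {"update_sent": record["sku"]}  # stand-in for the update message
```

However many records the user pages through, the data center sees exactly two messages: the inquiry and the update, which is what keeps the traffic charges and the accumulated delay down.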
A connection to the data center could be made via the VPN or directly from the Internet to the data center. In the former case, it would be possible to extend the VPN to the cloud (incurring an extra cost), or to drop cloud traffic at one or more of the remote site locations, to be hauled back to the data center. This is usually an option where there are multiple geographic zones of cloud hosting. The best answer can always be determined by mapping out the workflows and exploring each option for its cost and its contribution to latency.
Step 3: Hone scalability and componentization.
The final step is defining the cloud workflows that will link the user interactions and the data-center interactions, and this is where it's important to watch for "excess scalability" and "excessive componentization". The databases in the data center will typically have a specific maximum transaction rate and specific limits to scalability. Most well-designed hybrid-cloud applications are highly scalable on the user side and less scalable as they move toward the data-center connection. You can identify excess scalability by looking at the workflows between the cloud components that connect with users and those that connect with the data center.
A network professional's role in application planning is cemented by workflows, because workflows shape every aspect of every application. Every workflow is a cash flow from enterprise budgets to cloud providers, software and hardware vendors, and network providers. Every workflow adds latency, adds complexity, and adds to development and maintenance time and costs. Inside the internet and the cloud, connectivity is implicit, and that's led IT professionals to ignore workflows and their consequences because they "just work". Because network connections carry workflows, they tie networks and network professionals to applications, the cloud, information technology, and, most important, to formal IT planning.
Grab a bundle of workflows, and get ready to take your seat.