Enterprises operating in the traditional monolithic environment may have strict organizational structures. As a result, security requirements may restrain them from transitioning to a hybrid or cloud-native application deployment model.

In spite of the obvious difficulties, the majority of enterprises want to take advantage of cloud-native capabilities. Today, most entities are considering or evaluating cloud-native to enhance their customers' experience. In other cases, the driver is the ability to draw richer customer market analytics or to achieve operational excellence.

Cloud-native is a key strategic agenda that allows customers to take advantage of many new capabilities and frameworks. It enables organizations to build and evolve going forward to gain an edge over their competitors.

Applications are evolving

Let's face it: applications are evolving very quickly. Traditional applications are now complemented with additional cloud-native capabilities, operating alongside new containerized, modular front-end or back-end services. The core application is still a 3-tier monolith, but the cloud-native services are bolted on and send data back to the core application in the private data center.

Transitioning goals

Ideally, enterprises already have a security stance they are happy with. They have firewalls, IDS/IPS, WAFs, and segmentation: approaches that work perfectly well.

As we embark on cloud-native services, we need to add another layer of security. Enterprises must ensure they have security capabilities that are equal to or better than what they had before. This creates a gap that needs to be filled. The transition involves maintaining coverage, visibility, and control in traditional environments while taking advantage of cloud-native services.
All of this must be done with a zero trust security posture of default deny.

The complex environment

Within traditional environments, there is a variety of data center architectures operating in public, private, hybrid, and multi-cloud deployment models. Formerly it was just private and public, but now hybrid and multi-cloud are the conventional norms.

A transition is occurring across the physical, cloud, and application environments. This transition is highly dynamic and heterogeneous, and for the foreseeable future we are likely to have hybrid connectivity.

Security and hybrid cloud

One of the main focuses of hybrid connectivity is interactions. It is common for large enterprises to have a little bit of everything: applications in the cloud and on-premises, microservices and monoliths. All these entities live and operate in silos.

One needs good coverage of every interaction between components within different architectures. For effective security, one should monitor for unexpected behaviors during these interactions. If this coverage is overlooked, the door is open for compromise as the components communicate with each other. Security is only as strong as its weakest link.

The traditional network approach

The traditional network approach is what everyone is familiar with, and it is how the majority of security is implemented today. It is also the least flexible architecture, as security is tied to an IP address, a VLAN, or the traditional 5-tuple. The traditional approach is an ineffective way to set security policy.

Besides, networking is vendor-specific. How you implement an ACL or VLAN will require different configurations per vendor, and in some cases differences also exist within the same vendor. Some have evolved to Chef or Puppet, but the majority of vendors still rely on the CLI, which is manual and error-prone.

The hypervisor

For the application, there is an attack surface that encompasses everything on the hypervisor.
It is very extensive when you consider how many VMs can be placed on a hypervisor: the more VMs, the larger the blast radius. Hence, there is the possibility of VM escape, where the compromise of one VM lets a bad actor access all other VMs on that hypervisor. Essentially, a hypervisor can inadvertently magnify the attack surface.

Host-based firewalls

In recent times, host-based firewalls have improved security by blocking unwanted inbound traffic by port number. Consequently, the attack surface and control now sit down at the workload level. However, we are still faced with the problem that the policy is carried out in a distributed manner.

The tools outlined above describe a variety of security approaches, all of which are widely implemented today. They are all necessary solutions that take you from a coarse- to a fine-grained security model. However, the changeover to hybrid and cloud-native requires an even more fine-grained approach, which is called zero trust.

The next evolutionary phase

We are just arriving at a phase where solutions for virtualized environments based on VMs in the public and private clouds are starting to mature. As we reach this phase, we are already beginning to witness the next evolution.

The next step in the evolution of DevOps-led environments is based on containers and orchestration frameworks. This brings another order of magnitude to the complexity of the environment in terms of computing and networking. The existing virtualized environments based on VMs will not be able to handle the complexity that containerized environments present. So, what's the right way forward?

Network and application independence

The security and compliance framework must be independent of the network. In a sense, they should operate like two ships passing in the night.
Besides, they should be identity-based. The key benefit of an identity-based solution is visibility into service-to-service communication, which becomes the building block for authentication and access control.

Uniform security policies and adaptive scaling

You need the ability to cover a wide variety of possible combinations. It's not just about covering different kinds of infrastructure; you also need to cover the interactions between them, which can occur within very complex environments.

In the modern world, we have orchestration, containers, and ephemeral, dynamic services that change functionality and scale up and down. Therefore, the security solution should adapt with the underlying services, whether they are scaling or evolving.

Automatically encrypt data in motion (mTLS)

As our applications span both hybrid and multi-cloud deployment models, encryption becomes complicated by public key infrastructure (PKI) systems and the management of tokens. If you have a solution that spans the application, physical, and cloud environments, you will have an elastic and pervasive approach. Eventually, this enables you to encrypt data in motion without the overhead of managing complex PKI systems.

Zero Trust

Any user, microservice, port, or API can introduce vulnerabilities. This brings us back to zero trust: "never trust, always verify". Moving the perimeter inward to protect the internal assets is a mandate of the zero trust security model. Perimeters will increase in number, becoming more granular and closer to the workload. Identity will be the new perimeter.

However, the internal assets you now have to protect are far greater in magnitude than the assets you protected when you were solely focusing on outbound services. The current solutions simply will not scale to this level.
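Since zero trust ultimately comes down to authenticating and encrypting every service-to-service connection, the mutual TLS mentioned above can be sketched with Python's standard ssl module. Both sides present a certificate and require a valid one from the peer; the certificate file names below are hypothetical placeholders.

```python
import ssl

def make_mtls_context(server_side: bool) -> ssl.SSLContext:
    """Build a context that both presents a certificate and demands
    a valid one from the peer (mutual TLS)."""
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose)
    # Present this workload's own identity (paths are hypothetical).
    ctx.load_cert_chain(certfile="workload.crt", keyfile="workload.key")
    # Trust only the internal CA, and insist the peer proves its identity.
    ctx.load_verify_locations(cafile="internal-ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

A real deployment would issue and rotate these certificates automatically per workload, which is what removes the PKI management overhead described above.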
The environment that now needs to be protected is an order of magnitude greater in scale. Therefore, it is mandatory to have a solution designed from the ground up to be as scalable as the data center and compute environment. Zero trust becomes even more important during the transition period because there is more cross-talk between services and workloads deployed in different environments, and the medium between the environments can have varying levels of trust.

Hence, adopting the zero trust approach is a critical element in maintaining an effective security posture during the transition phase. If you don't trust your perimeter, the only way to be secure is to encrypt all communication between the services.

Sample solution summary

Ideally, the solution must offer a unique application-level security platform. A layer 7 solution enables administrators to understand the "who" and the "when", and helps figure out which services are being used, with application-level capability leveraging NLP.

A network-independent, application-centric security solution is the base value proposition. It covers a wide variety of virtual and network environments, with a focus on the transition to cloud-native under a zero trust approach.

Being workload-centric rather than network-centric makes the solution more stable, readable, and manageable. It supports the use of automation to derive a least-privilege policy from the observed activity. This provides a full-circle approach of visualization, activity, policy, and the differences between them. A workload-centric policy warns when an activity violates the policy and where the policy might be too loose given the observed activity.
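The last point, deriving a least-privilege policy from observed activity and flagging both violations and overly loose rules, can be sketched in a few lines of Python. The service names and flows here are hypothetical:

```python
# Observed service-to-service calls, keyed by logical identity rather
# than IP address (service names are hypothetical).
observed = {
    ("frontend", "orders"),
    ("orders", "payments"),
}

# Candidate least-privilege policy: allow exactly what was observed,
# plus one hand-written rule that no observed activity actually uses.
policy = set(observed)
policy.add(("frontend", "payments"))

def check(activity, policy):
    """Return (violations, unused_rules): activity outside the policy,
    and allow-rules the observed activity never exercised."""
    violations = activity - policy
    unused = policy - activity
    return violations, unused

violations, unused = check(observed | {("orders", "inventory")}, policy)
# ("orders", "inventory") violates the policy, and the unused
# frontend->payments rule is too loose given the observed activity.
```

Running such a check continuously against live traffic closes the loop: violations become candidates for alerts, and unused rules become candidates for tightening.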
Policies should be set using logical attributes, not physical attributes.

The security platform ensures that you know exactly what's happening while you undergo the transition from legacy to cloud-native application environments. It successfully fills the gap needed for a secure and smooth migration.
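To illustrate the difference between logical and physical attributes: a rule expressed in labels keeps working when the scheduler moves a workload to a new IP address, whereas an address-based rule silently breaks. A toy sketch, with all labels and addresses hypothetical:

```python
# Workloads carry both physical (IP) and logical (label) attributes.
workloads = [
    {"ip": "10.0.1.7",  "labels": {"app": "frontend", "env": "prod"}},
    {"ip": "10.0.2.19", "labels": {"app": "orders",   "env": "prod"}},
]

# Logical policy: select endpoints by label, never by address.
rule = {"from": {"app": "frontend"}, "to": {"app": "orders"}}

def selects(selector, workload):
    """True when every key/value in the selector matches the workload's labels."""
    return all(workload["labels"].get(k) == v for k, v in selector.items())

def allowed(src, dst, rule):
    return selects(rule["from"], src) and selects(rule["to"], dst)

# The rule still matches after the frontend is rescheduled to a new IP.
workloads[0]["ip"] = "10.0.9.42"
```

This is the same selection model that label selectors use in systems such as Kubernetes NetworkPolicy.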