Over the years, we have embraced new technologies to find improved ways to build systems. As a result, today's infrastructures have undergone significant evolution. To keep pace with the arrival of new technologies, the legacy is often combined with the new, but the two do not always mesh well. This fusion of the ultra-modern and the conventional creates drag in the overall solution, spawning tension between past and future in how things are secured.

The multi-tenant shared infrastructure of the cloud, container technologies like Docker and Kubernetes, and new architectures like microservices and serverless, while technically remarkable, increase complexity. Complexity is the number one enemy of security. Therefore, to align effectively with the adoption of these technologies, a new approach to security is required that does not depend on shifting infrastructure as the control point.

Companies like Cisco (Tetration) and VMware (NSX-T) are extending their solutions to cover container networking and security. The area has also been a springboard for new companies such as Tigera, which are launching new products in this space.

Challenging IT landscape

Administrators have limited control in the cloud. Enabling cloud services surrenders control of the existing static environments that were previously managed by local administrators with a well-defined perimeter and traditional security constructs. Cloud-native technologies are invisible to the traditional security perimeter, and traditional security controls, such as filtering on IP addresses and ACLs on firewalls, are no longer effective.

The classic 3-tiered architecture is now regularly broken down into many different application programming interfaces (APIs), all operating in shared environments. A microservice architecture eliminates explicit ingress and egress points. Communication between the services is carried out with network calls, and services are exposed via many different internal and 3rd-party APIs.
The API is the new resource, and all services now respond to API calls.

IP address challenges

IT organizations face challenges in attempting to maintain traditional IP addresses and ACL entries in this rapidly evolving environment. Modern dynamic infrastructure invalidates the existing network security approach, which assumes a relatively static configuration. ACL tables grow so large and eventually so complex that they become nearly impossible to manage effectively, and their processing hammers host performance. Layer 4 is coupled to the network topology and lacks the flexibility to support agile applications. In addition, the introduction of Network Address Translation (NAT) in the data path eliminates end-to-end visibility, adding a second wrinkle to connections. It is a challenge to effectively identify and secure application endpoints with IP addresses, so IP-based identification leaves administrators blindfolded. You have two options: either trust the network to do its job without having any control over it, or introduce a new approach to resource identification and control that is less reliant on IP addresses.

Application identity and policy

IP addresses are like home addresses: a home address creates a physical identity for the house and whatever is in it. We have used the IP address as a proxy for the identity of networked computing resources, but until now we have not had a reliable way to meaningfully identify the actual endpoint. The more you know about a person, the richer that person's identity is. Identity is more than knowing the person's name, height, weight or age, and you certainly can't tell much about them from their home address. The concept of identity is imperative to every kind of authorization and authentication happening in the real world and on a network. Whenever there is an interaction, you need to confidently establish identity and securely authenticate and authorize the connection.
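A minimal sketch of why IP-based ACLs break down under churn: the rule is written against an address, but the workload's address changes on every reschedule. The addresses, port and tier names below are illustrative assumptions, not taken from any real deployment.

```python
# Hypothetical ACL keyed on IP addresses: frontend -> database on 5432.
acl = [("10.0.3.17", "10.0.9.4", 5432, "allow")]

def acl_permits(acl, src_ip, dst_ip, dport):
    """Return True only if some ACL entry explicitly allows the 5-tuple subset."""
    return any(a == src_ip and b == dst_ip and p == dport and act == "allow"
               for a, b, p, act in acl)

# While the frontend pod holds 10.0.3.17, the rule matches.
print(acl_permits(acl, "10.0.3.17", "10.0.9.4", 5432))  # True

# After a reschedule the same workload comes back as 10.0.7.80, and the
# still-legitimate connection no longer matches any rule.
print(acl_permits(acl, "10.0.7.80", "10.0.9.4", 5432))  # False
```

Multiply this by thousands of short-lived workloads and the ACL table either falls behind reality or grows unmanageably large, which is exactly the operational problem described above.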
The interaction could be any combination of communications between a user, an application or an API.

In zero-trust cloud environments, where you have to assume that everything is accessible to anyone at any time, additional security is required for application components during their interactions on the network. It is not enough to protect against external threats; it is also necessary to fortify against internal vulnerabilities and configuration errors. If a workload comes up with a specific identity, that identity should be cryptographically signed so there is no opportunity to tamper with it. As part of the process, when the endpoint presents itself to another endpoint, it should present the signed identity along with the certificate used to establish trust. Cloud-native applications are elastic and grow dynamically on demand. This dynamic nature requires a new type of persistent identity. Traditionally, the application is configured with an IP address, but in a dynamic environment where you do not control the infrastructure, the IP address is no longer a reliable or persistent way to recognize the application. However, if you give the application, service or API a persistent identity, you can recognize who it is.

Why is this important? Policy creation requires authorization and authentication of who is trying to communicate. The traditional model of writing policies is based on IP addresses, but because IP addresses are no longer persistent, it becomes backbreaking if not impossible to frame policies at scale. With a stable identity paradigm and the ability to reliably identify application components, such as containers, microservices or APIs, security policy can be distributed with the application for real-time enforcement at scale, independent of the network.
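Policy written against identity attributes instead of addresses can be sketched as follows. The attribute names ("app", "env", "team") and the rule shape are illustrative assumptions, not any vendor's actual policy schema; the point is that the rules survive any change of IP address.

```python
# Hypothetical identity-based policy: rules select on workload attributes,
# never on addresses. First matching rule wins; no match means deny.
POLICY = [
    # (source selector, destination selector, action)
    ({"app": "frontend", "env": "prod"}, {"app": "orders-api", "env": "prod"}, "allow"),
    ({"env": "dev"}, {"env": "prod"}, "deny"),
]

def matches(selector, identity):
    """A selector matches when every attribute it names agrees with the identity."""
    return all(identity.get(k) == v for k, v in selector.items())

def evaluate(src_identity, dst_identity):
    """Return the first matching rule's action; default-deny otherwise (zero trust)."""
    for src_sel, dst_sel, action in POLICY:
        if matches(src_sel, src_identity) and matches(dst_sel, dst_identity):
            return action
    return "deny"

# The workloads' IPs can change freely; only their attested attributes matter.
frontend = {"app": "frontend", "env": "prod", "team": "web"}
orders = {"app": "orders-api", "env": "prod", "team": "payments"}
print(evaluate(frontend, orders))  # allow
```

Because the rule references stable attributes, the same policy can be distributed with the application and enforced anywhere the workload lands, which is what makes enforcement at scale tractable.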
Persistent and attested identity eases policy enforcement across dynamic workloads and makes uniform security possible across multiple environments.

Discussing visibility

When you examine networking, you have a flow with a source and a destination. The source is subnet-based and is assigned to a workload, for example a front-end or back-end tier. When you examine a flow in the cloud, you cannot figure out the originator or the reason the endpoints are trying to communicate. The adoption of cloud-native applications abandons control over the infrastructure where applications are instantiated. From the visibility perspective, you have neither a persistent identity nor a meaningful way of tracking service-to-service communications. However, if you give an application a persistent identity, you can quickly figure out which is the front-end or back-end tier and why they are trying to communicate. Persistent identity improves visibility and compliance in the network.

Application identity

Workloads can be encapsulated in several ways, such as a virtual machine (VM), bare metal or a container. A container security solution must evaluate the workload itself; the method of encapsulation, and consequently how the workload is protected, should be a secondary concern.

A container security solution built on application identity should decouple security from the network and the infrastructure so that it can scale. This approach enables policies to be tied to the actual workloads' identities. Protecting workloads with fine-grained, uniform security policies simplifies the security model without requiring any changes to business logic or network configuration. Now, all traffic between applications and components can be authenticated, authorized and transparently encrypted.
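The visibility gain can be illustrated by enriching a raw flow record with workload identity. The registry below is a stand-in for whatever attestation mechanism supplies the identity, and the field names are illustrative assumptions.

```python
# Without identity, a flow log can only report addresses and ports.
raw_flow = {"src": "10.0.3.17", "dst": "10.0.9.4", "dport": 5432}

# Hypothetical identity registry: in a real system the identity travels
# with the connection rather than being looked up by address.
identities = {
    "10.0.3.17": {"app": "frontend", "tier": "web"},
    "10.0.9.4": {"app": "orders-db", "tier": "database"},
}

def enrich(flow, registry):
    """Attach workload identity to a raw flow record for audit and compliance."""
    src = registry.get(flow["src"], {"app": "unknown"})
    dst = registry.get(flow["dst"], {"app": "unknown"})
    return f"{src['app']} -> {dst['app']} on port {flow['dport']}"

print(enrich(raw_flow, identities))  # frontend -> orders-db on port 5432
```

An auditor reading "frontend -> orders-db" can judge whether the flow is expected; an auditor reading "10.0.3.17 -> 10.0.9.4" cannot.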
As a result, these solutions fully protect the applications and eliminate unauthorized lateral movement within the network.

The challenges discussed with network- and perimeter-centric security are no longer a liability, because security policy enforcement no longer depends on IP. For example, if a database workload goes offline and returns with a different IP address, it doesn't matter, since the workload now has a persistent identity extracted from its metadata or other environment variables.

How is it built?

When a protected workload is instantiated, whether through Docker or Kubernetes or as a process on a host or VM, the security solution must examine all the relevant attributes of the workload to establish a unique application identity for the resource.

A variety of sources of metadata and security information are used to establish the context of the application identity. These can come from the operating system itself, for example from systemd and the metadata associated with it in the form of environment variables. If the workload is a process on a host, the solution can examine the environment variables issued through a CLI command or the systemd process. Beyond that, data can be extracted from any other available source, such as identity documents from cloud service providers, CI/CD pipeline metadata about the application, vulnerability information from internal or 3rd-party vulnerability scanners, and threat behavior observed during runtime.

It is possible to embed a multi-attribute identity in application communications by intercepting the 3-way handshake used to establish network connections between components. A module sits in front of the application's TCP/IP stack. The identity is then injected into the TCP options of the SYN and SYN-ACK packets, making the connections infrastructure-independent.
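The identity-building step described above can be sketched as follows: collect attributes from available metadata, then sign the result so a receiving endpoint can detect tampering. HMAC with a shared key stands in for the certificate-based signing the text describes, and the attribute names, environment variables and key are all illustrative assumptions.

```python
import hashlib
import hmac
import json
import os

# Assumption for the sketch: a key distributed out of band. A real system
# would use certificates/PKI rather than a shared secret.
SIGNING_KEY = b"demo-key-distributed-out-of-band"

def build_identity():
    """Collect identity attributes from whatever metadata sources are available."""
    return {
        "app": os.environ.get("APP_NAME", "orders-api"),        # hypothetical env var
        "image": os.environ.get("IMAGE_DIGEST", "sha256:abc"),  # hypothetical env var
        "orchestrator": "kubernetes",
    }

def sign(identity):
    """Canonicalize and sign the identity so it cannot be altered in flight."""
    payload = json.dumps(identity, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    """The receiving endpoint recomputes the tag; any tampering fails the check."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = sign(build_identity())
print(verify(payload, tag))                    # True: identity is intact
print(verify(payload + b"tampered", tag))      # False: identity was altered
```

The canonical JSON serialization (`sort_keys=True`) matters: both ends must sign byte-identical payloads for verification to succeed.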
As part of the 3-way handshake, a JSON Web Token (JWT) is embedded to exchange the multi-attribute identity used to establish and enforce policy. When an incoming connection request arrives, a front-end module examines the identity of the request and checks policy to see whether the communication is permitted. If it is not, the module simply drops the connection, and the receiver never sees the attempted connection. The recipient is thus cloaked against all types of malicious discovery, such as unwarranted connections, attack probes or scans made by attackers.

So far, we have discussed controlling access at the network layer, i.e. the TCP layer, but what about the Hypertext Transfer Protocol (HTTP) layer? In a microservices environment, the typical resource is an API. In this case, an HTTP proxy that operates higher up the stack can be deployed. For example, a service exposes a set of Uniform Resource Identifiers (URIs) with a specific scope. In this scenario of protecting an API, the use case extends to the user who is trying to consume the API or service. Now, a service-to-service, user-to-service or service-to-user connection is authenticated and authorized on the basis of a combination of the user and service identities, as permitted by policy.

The HTTP proxy extends the capability well beyond network access control, up to API access control, where the identity can be a user or a service.

To embrace the benefits of application disaggregation and cloud-native applications, we must change how we identify and secure application endpoints by establishing a unique application identity for each resource. This requires both technical and mindset changes.

The old way of using the network as a security control point is not only operationally challenging but also a security hazard. The combination of application identity with a distributed policy enforcement model creates a security paradigm that efficiently implements uniform security across any infrastructure at scale.
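To make the token flow concrete, here is a minimal HS256 JWT built and verified with only the standard library, gating access to a scoped URI the way the HTTP proxy described above would. This is a teaching sketch, not a production JWT library: the key handling, claim names and scope convention are illustrative assumptions.

```python
import base64
import hashlib
import hmac
import json

KEY = b"shared-demo-key"  # assumption: in practice, keys come from PKI

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(claims: dict) -> str:
    """Build header.payload.signature per the HS256 JWT layout."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(KEY, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def authorize(token: str, uri: str) -> bool:
    """Admit the request only if the token verifies and its scope covers the URI."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(KEY, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        return False  # tampered or unsigned: silently dropped, sender learns nothing
    padded = body + "=" * (-len(body) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return any(uri.startswith(scope) for scope in claims.get("scopes", []))

token = make_token({"sub": "frontend", "scopes": ["/orders/"]})
print(authorize(token, "/orders/42"))    # True: within scope
print(authorize(token, "/admin/keys"))   # False: out of scope
```

Note that a failed check simply returns False with no distinguishing error, mirroring the cloaking behavior: a probing attacker cannot tell a protected resource from a nonexistent one.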