Serverless computing: Do we need to rethink the serverless framework?

With serverless computing, services will be constructed from microservice components linked together by dynamic policy intelligence


Serverless computing is one of today’s hottest technology topics. Now that Amazon has announced AWS Lambda and Microsoft is previewing Azure Functions, the concept is becoming real.

Serverless is billed as a solution that dynamically creates cloud services to process events in ephemeral containers executed on your behalf as a backend-as-a-service. Instead of leasing a virtual machine, then writing and deploying your code, you get a new “pay-per-event” pricing model and a catalogue of executable functions (building blocks) from which to construct your own service. It is a DIY cloud deployment model that promises to let us use clouds the same way we have become accustomed to using mobile applications on our smartphones: simply access the app (“function”) you need at any moment.
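
To make the model concrete, here is a minimal sketch of such a “function” in the AWS Lambda handler style. The event payload, field names and greeting logic are invented for illustration; a real event’s shape depends on the service that triggers the function.

```python
import json

def handler(event, context=None):
    """Process a single event and return a response.

    The platform invokes this on your behalf and bills per invocation;
    no server or VM is provisioned by the developer.
    """
    # 'name' is a hypothetical field; real events vary by trigger.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Invoking `handler({"name": "cloud"})` returns a response dict; the point is that the developer ships only this function, and the platform supplies the execution environment on demand.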


In a serverless framework, developers should think of their services as being decoupled from the virtual machines upon which they execute, and they should only be concerned with the function they need for their service. In a way, this is analogous to how cloud applications are already being decoupled from physical infrastructure via virtualization, except that now we don’t even have to worry about virtual machines!

This is a giant leap in the evolution of cloud services. It suggests that virtual machines and containers are just infrastructure optimizations that can themselves be allocated and automated. Presumably, in a completely serverless environment, services can be instantiated anywhere in the cloud with full access to whatever data the service requires.

This implies a service architecture in which the storage and network resources are broadly accessible (by replication or remote access), and service address resolution is global, dynamic and instantaneous. This address resolution must map a service to a particular VM or physical server and ensure that the necessary infrastructure resources are available to be accessed at that moment. Today AWS Lambda and Azure Functions can’t quite do this broadly, but they do work as a backend-as-a-service for some very well-defined use cases (e.g., IoT) and some specific enterprise application flows.

Serverless services are infrastructure agnostic

If serverless services are truly our ultimate goal, is it necessary that they be built on top of virtual machines and containers? The answer depends on what you are trying to do. Architecturally, the concept is based on a Service-Oriented Architecture (SOA), so a serverless framework can be directly constructed over physical infrastructure, containers, virtual machines or a combination of these.

Regardless of which environment is used, one thing is universally common: Serverless services will be constructed as a collection of microservices, linked together dynamically with policy enforcement intelligence.
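
A sketch of what “linked together dynamically with policy enforcement intelligence” might look like: microservices modeled as plain functions, chained at runtime, with a policy consulted at every link. The step names, record fields and policy rule here are all hypothetical, not part of any real framework.

```python
def validate(record):
    """Hypothetical microservice: mark a record valid if it carries an id."""
    return {**record, "valid": bool(record.get("id"))}

def enrich(record):
    """Hypothetical microservice: attach a (made-up) region tag."""
    return {**record, "region": "eu-west"}

def policy(step_name, record):
    """Example policy rule: never enrich a record that failed validation."""
    if step_name == "enrich" and not record.get("valid"):
        return False
    return True

def run_pipeline(record, steps, policy):
    """Link microservices dynamically, enforcing policy before each step."""
    for step in steps:
        if not policy(step.__name__, record):
            raise PermissionError(f"policy denied step {step.__name__}")
        record = step(record)
    return record
```

The key design point is that the chain itself is data, not code: which steps run, and whether each link is permitted, is decided by the policy layer at execution time rather than baked into the services.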

Further, each microservice becomes a small service in its own right: an autonomous application unit that requires access to the small but specific compute, storage and network infrastructure it needs to execute, regardless of where it is instantiated. Just as the microservice can be thought of as a unit of an application, the small slice of compute, storage and network infrastructure it needs can be thought of as an autonomous “microexecution unit.” The concept of a microservice as a small part of an application and a microexecution unit as a small part of the infrastructure becomes a cornerstone of our serverless vision.

Since a serverless microservice can execute over bare metal, a container or a virtual machine, it must not lose its “soft connection” to the resources it needs wherever it runs. That requires a new notion of infrastructure “resource resolution”: a microservice links to its resources via resource descriptors that are logical abstractions, and resource resolution protocols then translate those descriptors to the actual location of the information the microservice needs, regardless of where it is instantiated.
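
In its simplest form, resource resolution is a lookup that happens at the moment of execution: the microservice holds only a logical descriptor, and a resolver binds it to a concrete endpoint. The descriptor names, registry contents and addresses below are invented purely for illustration.

```python
# A (made-up) registry mapping logical resource descriptors to their
# current physical endpoints. In a real system this would be maintained
# by a resolution protocol, not a static dict.
REGISTRY = {
    "storage:orders-db": "tcp://10.0.3.7:5432",
    "queue:invoices": "amqp://10.0.4.2:5672",
}

def resolve(descriptor, registry=REGISTRY):
    """Translate a logical descriptor into its current physical location."""
    try:
        return registry[descriptor]
    except KeyError:
        raise LookupError(f"no binding for descriptor {descriptor!r}")
```

Because the microservice only ever names `"storage:orders-db"`, the binding on the right-hand side can change as the service (or the resource) moves, without touching the service’s code.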

Fortunately, the notion of a logical resource abstraction already exists. Today, services can be accessed via a URL, which is a logical service end point. For instance, in Linux, once a server providing a service is reached, socket file descriptors are used to access the network it attaches to, regardless of the physical network within which it exists. Likewise, a file descriptor can be used to access a file the service requires, wherever that file might be located.

Microexecution units

What this all means is that microservices do not act alone. Each microservice needs to be associated with a number of logical resource descriptors, which are moved as the microservice is moved, and the resource resolution protocols behind these logical descriptors almost instinctively find where resources are truly located. This is what I mean when I call the resource descriptors associated with a microservice a “microexecution unit.”
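
One way to picture a microexecution unit is as a small, portable bundle of logical descriptors that travels with its microservice. The class names, fields and descriptor strings below are hypothetical, a sketch of the idea rather than a real API.

```python
from dataclasses import dataclass, field

@dataclass
class MicroexecutionUnit:
    """The logical resource descriptors a microservice carries with it."""
    compute: str                                   # e.g. "cpu:small" (made-up)
    storage: list = field(default_factory=list)    # logical storage descriptors
    network: list = field(default_factory=list)    # logical network descriptors

@dataclass
class Microservice:
    name: str
    unit: MicroexecutionUnit

    def migrate(self, new_host):
        """Move the service: the descriptors move with it, unchanged.

        Resolution protocols are then responsible for re-binding each
        descriptor to actual resources reachable from new_host.
        """
        return new_host, self.unit
```

Note what does not change on migration: the unit’s descriptors are identical before and after the move, which is exactly the “soft connection” property the resolution protocols must preserve.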

One may ask: If a microservice only controls logical resource descriptors, then whose responsibility is it to ensure that the physical resources that are resolved by the resource resolution protocols will be policy enforceable for traffic shaping, security, access control and authentication?

This is a very important question. In today’s VM- and container-dominated world, you might assume this job belongs to either the server operating system or the hypervisor. But neither the OS nor the hypervisor can really control, let alone enforce, policies between microservices or from a microservice’s logical resource descriptors into the infrastructure.

Fortunately, new initiatives such as the Contiv open source framework help here. Contiv rightly advocates that while containers have done a good job of providing a framework to specify “application intents” relative to what the OS should expect, they have fallen short of the ability to specify “infrastructure intents” that are policy enforceable.

What does all of this mean?

While serverless computing and its associated framework are here to stay, today’s serverless services are derived from a cloud infrastructure based on virtual machines, which do not provide an appropriate foundation for execution. A broader framework is needed to extend the serverless movement to cover all services of the future. I think a framework based on the microexecution unit is worth our deep consideration.

This article is published as part of the IDG Contributor Network.
