Mixing and matching massively hybrid applications: the future of cloud

Opinion
Oct 12, 2017 | 6 mins
Cloud Computing | Hybrid Cloud | IaaS

Numerous hybrid cloud studies tell us that companies aren’t choosing just one IaaS vendor. Instead, they’re choosing several, but they tend to treat full applications as the building blocks of that multi-cloud choice. Really sophisticated shops are taking traditional three-tier applications and running the databases in one place and the presentation tiers elsewhere, thereby creating hybrid applications.


A year ago, IDC told us that 68 percent of organizations have adopted cloud for enterprise applications—and that it’s not just about cost, but about revenue increases as well. That study also says that 73 percent of respondents, who spanned both IT and line-of-business users, have a hybrid cloud strategy in place.

But when you dig further into those numbers, you’ll find that to most of those respondents, “hybrid” means “subscribing to multiple external cloud services.” This can mean some applications in a portfolio run on one cloud while others run on a different cloud. To another 47 percent of those surveyed, “hybrid” means “using a mix of public cloud services and dedicated assets,” which conjures an image of a database running on-site sending data to a web or application server on a public cloud.

Those are nice definitions, but that’s not the future of cloud.

Application architectures have reached a point where massively hybrid applications are now possible, meaning applications that mix and match best-of-breed services on different public clouds. Imagine writing a text file to AWS S3 that causes the Azure text-to-speech service to generate an MP3 file, which gets written to IBM Bluemix object storage that hosts your website. The public cloud services are becoming more interchangeable in this way, and the data gravity issues that are a relic of older, monolithic applications are becoming less important.
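To make that concrete, here is a minimal Python sketch of such a chain, under stated assumptions: only the boto3 call to S3 is a real SDK call, while synthesize_speech_on_azure and upload_to_bluemix_object_storage (and the bucket and key names) are hypothetical placeholders for the respective vendor SDKs.

```python
# Hypothetical sketch of a cross-cloud pipeline: write text to AWS S3,
# synthesize speech on Azure, store the MP3 in IBM Bluemix object storage.
# Only the boto3 calls are real; the two helpers are placeholders.
import boto3


def synthesize_speech_on_azure(text: str) -> bytes:
    """Placeholder: call the Azure text-to-speech service, return MP3 bytes."""
    raise NotImplementedError("wire up the Azure Speech SDK here")


def upload_to_bluemix_object_storage(key: str, data: bytes) -> None:
    """Placeholder: write the MP3 into the bucket backing the website."""
    raise NotImplementedError("wire up the Bluemix object storage SDK here")


def publish_announcement(text: str) -> None:
    # 1. Persist the source text on AWS S3 (bucket and key are illustrative).
    s3 = boto3.client("s3")
    s3.put_object(Bucket="announcements-source", Key="latest.txt",
                  Body=text.encode("utf-8"))

    # 2. Turn the text into audio using Azure's text-to-speech service.
    mp3_bytes = synthesize_speech_on_azure(text)

    # 3. Host the result on IBM Bluemix object storage alongside the website.
    upload_to_bluemix_object_storage("latest.mp3", mp3_bytes)
```

The point is less the specific calls than the shape: each step is a small, swappable piece that talks to a different cloud.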

How close is this, really, and how did we get here?

From monoliths to microservices

For the last 30 years, software engineers have been breaking application architectures down into smaller and smaller components that are easier to iterate on. There’s a great examination of this from Kim Clark on IBM’s developerWorks site entitled “Microservices, SOA, and APIs: Friends or enemies?” In particular, there are two diagrams I love from that article. The first illustrates the power of breaking a siloed component up into microservices.

What is great about this diagram is how visually obvious it makes the argument for microservices that can be individually scaled instead of scaling the entire siloed component. Put in a larger context, Kim shows us a second diagram depicting the evolution of a set of application functionality.

The focus of discussions like this is often the business logic components at the top of the diagram: their access APIs remain the same, but by organizing them differently they become individually scalable as microservices. Just as important, though, is how the information in the databases is represented. Apart from security, the No. 1 issue cited as a barrier to early cloud adoption was data gravity. Databases were simply too big to move from private data centers up to the cloud.

But an important distinction this diagram makes is that, as the business logic becomes more composable and more loosely coupled, so does its underlying data. What that really means is that the notion of state that used to be locked in a single, big database during the monolithic era of application architecture can now be spread all over the place. As long as references among the different locations of the data can be kept in sync (which is no small feat, but not exactly impossible), it opens up all kinds of possibilities.
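As a rough illustration of what that spread-out state can look like, here is a small, hypothetical Python sketch in which a piece of business logic carries only lightweight references to data living on three different clouds; the names and URLs are illustrative, not drawn from the article.

```python
# Hypothetical sketch: state held as lightweight, cross-cloud references
# rather than a single monolithic database record. All names are illustrative.
order_state = {
    "order_id": "42f7c9",
    "customer_profile": "https://example-bucket.s3.amazonaws.com/customers/42f7c9.json",
    "invoice_pdf": "https://example.blob.core.windows.net/invoices/42f7c9.pdf",
    "audit_log": "https://example-cos.s3.us.cloud-object-storage.appdomain.cloud/audit/42f7c9.log",
}

# Keeping these references in sync (for example, updating them when an object
# moves) is the hard part, but the state itself stays small enough to flow
# through events between clouds instead of being pinned to one database.
```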

From microservices to serverless

The next evolutionary step, to serverless software architectures, has two important pieces. First, instead of microservices running as daemons inside containers, eating up memory and polling largely idle ports for API requests to service, business logic sits on disk until just the right moment, when it gets loaded into a pre-warmed container that already has its language runtime ready to go. That approach enables load times in the hundreds, if not tens, of milliseconds, allowing a developer to take advantage of the strengths of containers without ever having to write a Dockerfile.

But what does “just the right moment” mean?

Traditional microservices use APIs as contracts between components, but the other distinguishing characteristic of serverless architectures is that they use events. Take the canonical AWS Lambda example: an image uploaded to one S3 bucket triggers a function that writes thumbnails to a second bucket.

Until the image lands in the first S3 bucket, the Lambda function in question sits on disk and generates only storage charges. Upon placement of the file in S3, however, the event framework triggers the Lambda function into memory where it executes, outputs its thumbnails and shuts down.
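A minimal sketch of such a thumbnail function, in Python, might look like the following; the Pillow dependency, bucket names and environment variable are illustrative assumptions rather than details from the canonical example.

```python
# Hypothetical sketch of the S3-triggered thumbnail Lambda.
# Assumes Pillow is bundled with the deployment package and that the
# destination bucket name is supplied via an environment variable.
import io
import os

import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = os.environ.get("THUMBNAIL_BUCKET", "my-thumbnail-bucket")


def handler(event, context):
    # The S3 event says which bucket and key triggered this invocation.
    record = event["Records"][0]["s3"]
    source_bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Pull the original image down and resize it in memory.
    original = s3.get_object(Bucket=source_bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(original))
    image.thumbnail((128, 128))

    # Write the thumbnail to the second bucket; the function then shuts down
    # and goes back to generating only storage charges.
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    s3.put_object(Bucket=THUMBNAIL_BUCKET,
                  Key=f"thumbnails/{key}.png",
                  Body=buffer.getvalue())
```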

The level of decoupling this enables is significant. The first S3 bucket knows nothing of the Lambda function that operates on its data, nor does it know anything about the second S3 bucket that houses the resulting thumbnails. A more complex chain could be easily constructed by putting a second trigger on the second S3 bucket that, for example, pushes the thumbnails out to a CDN, unbeknownst to the prior components of the chain.

Keeping only a microscopic amount of state between pieces of business logic, and relying on events to string sequences together, is becoming commonplace. This structure enables mixing and matching components from different clouds because that lightweight state makes the old data gravity issues vanish. That hasn’t been possible before.

Vendor-independent triggers

The very clever folks over at the Serverless Framework have been making it easier to create not just functions on AWS Lambda, IBM OpenWhisk, Azure Functions, Google Cloud Functions and Kubeless, but also the events that string them together. While a single Serverless Framework deployment cannot yet support multiple vendors, this summer they announced an Event Gateway that would make that possible by creating a single place to organize vendor-independent triggers.

So in the scenario presented earlier, where writing a text file to AWS S3 causes the Azure text-to-speech service to generate an MP3 file that gets written to IBM Bluemix object storage hosting your website, something like Event Gateway could set up the event triggers necessary to tie it all together, and the loose coupling would allow a developer to swap out pieces as the public cloud arms race continues to escalate.

Summary

Numerous hybrid cloud studies, not just the IDC one cited earlier, tell us that companies aren’t choosing just one IaaS vendor. Instead, they’re choosing several, but they tend to treat full applications as the building blocks of that multi-cloud choice. Really sophisticated shops are taking traditional three-tier applications and running the databases in one place and the presentation tiers elsewhere, thereby creating hybrid applications.

With microservices as its backbone and advancements in both serverless computing and event handling, massively hybrid applications that mix the best components from multiple public clouds are now possible. Building applications this way insulates an organization from being locked in to a single vendor and, by embracing lightweight state management, avoids the data gravity pitfalls that handcuffed prior generations of applications. The model gets turned on its head: public cloud vendors are challenged to outperform each other with advanced services like speech-to-text (and back), AI, blockchain and a wide variety of others, while organizations creating this new breed of applications retain maximum flexibility and velocity with small, decoupled pieces of business logic.

Contributor

Pete Johnson is a Technical Solutions Architect at Cisco, covering cloud and serverless technologies. Prior to joining Cisco as part of the CliQr acquisition, Pete worked at Hewlett-Packard for 20 years, where he served as the HP.com Chief Architect and as an inaugural member of the HP Cloud team.

The opinions expressed in this blog are those of Pete Johnson and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.