Serverless computing: How did we get here? (Part 1)


Learn how the physical server has been rendered obsolete.


Every once in a while I get asked what my background is, and my answer is always that I'm a generalist: a jack of all trades and master of some. Being a generalist is not easy in a world where IT professionals have been told since the beginning to specialize. Yet it takes a generalist to see the forest for the trees, an analogy well suited to the continually evolving world of cloud computing.

In 2001, I started my journey into the world of cloud computing. With a background in plant-floor automation and embedded systems, my point of view was admittedly skewed, with a natural affinity for distributed computing. The hot topic at the time was grid computing, and as I was learning about it, I recognized a model that made sense: CPU scavenging. CPU scavenging virtualizes all the spare CPU cycles wasted on desktops and servers as they sit idle (when you operate on a scale measured in billionths of a second, it turns out a lot of time is wasted waiting for something to do). However, CPU scavenging was viewed as pedestrian at best, not something a real IT person would use. How could such a model possibly work in the corporate IT world, where big iron hidden in humongous data centers, designed like fallout shelters with power backups and oodles of bandwidth, did all the heavy lifting? That seemed a reasonable objection, but when I tugged on the string and asked why a server was better than a desktop, conversation after conversation came to an abrupt end. I had to push further to better understand.
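To make the model concrete, here is a minimal sketch of a CPU-scavenging worker in Python. The coordinator endpoint and task format are hypothetical illustrations, not any real grid API: an otherwise-idle machine polls for a unit of work, computes it, and reports the result back.

```python
import json
import time
import urllib.request

# Hypothetical coordinator that hands out work units; not a real service.
COORDINATOR = "http://grid-coordinator.example.com"

def fetch_task():
    """Ask the coordinator for a unit of work; None means nothing is queued."""
    with urllib.request.urlopen(f"{COORDINATOR}/task") as resp:
        body = resp.read()
    return json.loads(body) if body else None

def run(task):
    """Spend otherwise-wasted cycles on the payload (placeholder computation)."""
    return sum(x * x for x in task["numbers"])

while True:
    task = fetch_task()
    if task is None:
        time.sleep(30)      # machine is idle and no work is queued; check back later
        continue
    result = run(task)
    report = urllib.request.Request(
        f"{COORDINATOR}/result/{task['id']}",
        data=json.dumps({"result": result}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(report)  # send the answer back to the coordinator
```

The point is simply that in this model the "server" is a logical role played by whatever hardware happens to be free, a theme that returns below.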

I started peeling the onion and realized there was a built-in positive bias for the word “server.” I could take any software application, and if I said it was running on a server, there was instant interest. However, if I then explained the server was actually a desktop, the jibes would start. I realized there was an important gap in knowledge: an imperfect understanding of what a server is, which had somehow transformed (likely with the help of marketing) from a logical concept into a physical box.

Our computing world is built on the client/server and n-tier logical models. When we translated these logical models into physical components, the term “server” tagged along, becoming the moniker for any computer in a data center. Walk through a typical data center and people can point to HTTP servers, application servers, database servers, integration servers, mail servers, etc. Walk through a cloud data center and the conversation changes dramatically. Instead of “there's the mail server,” it becomes “there's a server, and another one, and another one.” With virtualization, nobody can definitively say what application is actually running on which hardware. For most people, the cloud is essentially a euphemism for “all that stuff” in a data center.

I, however, choose to disagree.

I believe cloud computing is about having the right resources in the right place at the right time; a data center is an arbitrary collection of resources, just as a physical server is. Cloud originated from the struggle to reach a global audience with reusable software running on more efficient hardware. We are now seeing that triumvirate of forces give way to a new set that makes our existing data center-centric model of cloud obsolete. The Internet of Things (IoT) extends the connectivity of the network to every device. Mobility, from smartphones to tablets and watches, has eliminated physical location as a barrier. Big data is enabling a whole new slate of analysis not possible by other means. Each of these technologies is driving a new challenge that existing cloud implementations are unable to address: a data dilemma of gargantuan proportions.

Data is the oxygen of business growth. Whether big data, IoT, or mobility, the amount of data being created is increasing rapidly. However, moving data is expensive and slow, especially at the scale of terabytes and petabytes. Fierce competitors across multiple industries that are pushing the envelope of cloud are slowly realizing there simply isn't enough time available to:

  1. Capture data at the point of origination.
  2. Move the data across the country.
  3. Filter the data to focus on the most valuable elements.
  4. Combine the data with other data to broaden the perspective.
  5. Execute analytics on the data.
  6. Generate a result.
  7. Communicate the result back across the country.
  8. Leverage the result to drive some benefit.

Even if all of the above could be completed within a second or two, the bandwidth costs alone are prohibitive. Clouds, as constructed today, don't solve this rapidly evolving problem because they're still physically rooted in data centers that are too far away and too expensive to reach. While we virtualized the physical server, we never got rid of the server itself. Blinded by convention, the most common architecture I see proposed to address this data dilemma focuses on pushing servers closer to the edge: a “last mile” approach. However, does creating smaller micro-data centers really solve the problem?
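To put rough numbers on the dilemma, here's a back-of-the-envelope sketch in Python. The dataset size, link speed, and per-gigabyte price are illustrative assumptions, not quoted rates:

```python
# Back-of-the-envelope: what it takes to move a large dataset cross-country.
# All figures below are illustrative assumptions, not quoted rates.
data_tb = 10            # dataset size in terabytes
link_gbps = 1           # sustained WAN throughput in gigabits per second
price_per_gb = 0.09     # network egress price in dollars per gigabyte

data_gb = data_tb * 1000
transfer_seconds = (data_gb * 8) / link_gbps   # gigabits / (gigabits per second)
transfer_hours = transfer_seconds / 3600
egress_cost = data_gb * price_per_gb

print(f"{data_tb} TB over a {link_gbps} Gbps link: {transfer_hours:.1f} hours")
print(f"Egress at ${price_per_gb}/GB: ${egress_cost:,.0f}")
# Roughly 22 hours and $900 for 10 TB -- nowhere near a "second or two" budget.
```

Even under these generous assumptions, steps 2 and 7 alone dwarf the time and money available for the entire loop.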

The only way to solve the problem is through massively distributed computing, where the concept of a server reverts back to a logical construct and the physical world becomes invisible. In essence, it's a world of serverless computing.

In Part 2, I'll be taking a deeper look into the vision and benefits of serverless computing and share how the market is steadily moving in that direction.

This article is published as part of the IDG Contributor Network.
