Does 'share-nothing' NoSQL signal the end of system resource sharing?

NoSQL’s 'share-nothing' philosophy seems at odds with the explosive growth and acceptance of Linux containers that share resources on the same host


Driven by the need to partition databases into independent data sets to facilitate concurrent data access, NoSQL databases have been at the forefront of the “share-nothing” resource movement. But if NoSQL’s share-nothing philosophy is correct, then how do you explain the explosive growth and acceptance of Linux containers that share resources on the same host and the clusters and data center operating systems that run over them?

On the surface, these two movements appear to be at odds, but a deeper look shows merits for both.


The popularity of containers reflects the dire need for more granular and lightweight sharing of scarce or expensive server resources. Containers shift the idea of sharing from the coarse-grained model of carving a physical server into multiple virtual machines (VMs) to a fine-grained one, in which the resources consumed by each application are administered, monitored and controlled within a separate container-execution envelope.
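
The mechanism behind that fine-grained control is the kernel's cgroup interface. As a rough sketch of what a container runtime does at configuration time, the Python snippet below caps CPU and memory for one group of processes by writing to the cgroup v2 filesystem. It assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup and root privileges; the group name "app-demo" is purely illustrative.

```python
# Minimal sketch: the kind of per-group resource caps a container runtime sets up
# at configuration time, written directly to the cgroup v2 filesystem.
# Assumes cgroup v2 at /sys/fs/cgroup and root; "app-demo" is a hypothetical group.
import os

CGROUP = "/sys/fs/cgroup/app-demo"   # hypothetical group standing in for one container

os.makedirs(CGROUP, exist_ok=True)   # creating the directory creates the cgroup

def write(fname: str, value: str) -> None:
    with open(os.path.join(CGROUP, fname), "w") as f:
        f.write(value)

# Cap the group at half a CPU: 50 ms of runtime per 100 ms period.
write("cpu.max", "50000 100000")
# Cap the group at 256 MiB of memory.
write("memory.max", str(256 * 1024 * 1024))
# Move the current process (standing in for the containerized app) into the group.
write("cgroup.procs", str(os.getpid()))
```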

Containers also facilitate the deployment of cloud-native microservices, with container virtual networking linking containers together in support of a large number of containers within a cloud data center footprint.

There is one catch to the container's lightweight resource-sharing approach, however. It focuses on optimizing aggregate performance on a server using a resource-rationing scheme fixed at container configuration time. That approach guarantees fair sharing of resources among containers, but it falls short of dynamically adapting to the resource needs of individual processes and threads within a container.

Now that SR-IOV-capable devices such as software-defined Ethernet and storage controllers are achieving mass deployment, real-time resource adaptation for processes and threads running within a container is technically feasible. This refined capability can be thought of as a step toward an execution environment that is both application-centric and lightweight.
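
To make "SR-IOV-capable" concrete, the sketch below carves a physical NIC into independent hardware virtual functions (VFs) that could be handed to individual containers or processes, using the standard sysfs attributes Linux exposes for SR-IOV devices. The interface name and VF count are assumptions for illustration, and the operation requires root and SR-IOV-capable hardware.

```python
# Minimal sketch: splitting an SR-IOV-capable NIC into virtual functions (VFs)
# via the kernel's sysfs interface. Requires root and SR-IOV hardware;
# the interface name "eth0" and the VF count are illustrative.
import os

IFACE = "eth0"                                   # hypothetical SR-IOV-capable NIC
SYSFS = f"/sys/class/net/{IFACE}/device"

# How many VFs the device supports in hardware.
with open(os.path.join(SYSFS, "sriov_totalvfs")) as f:
    total = int(f.read())

# Create up to four VFs (or fewer if the device supports fewer).
with open(os.path.join(SYSFS, "sriov_numvfs"), "w") as f:
    f.write(str(min(4, total)))
```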

Clustered applications take this discussion to the next level—from an intra-server debate to an inter-server issue. Containers and their clusters treat servers as on-demand resources on which to create more stateless container instances as demand dictates, while leaving inter-container communication and inter-server resource sharing entirely in the hands of the applications.

A case in point: NoSQL databases gain performance by partitioning their datasets into parallel partitions, each operating autonomously without sharing or interference. That is great, but having multiple data partitions means updates to one partition now need to be replicated to other partitions to keep them in sync. That means an increase in east-west traffic moving data across containers.
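
The sketch below illustrates where that east-west traffic comes from: each write is routed to an owning partition by hashing its key, then copied to additional partitions for redundancy, so every replica is one more inter-node transfer on each update. The node names and replica count are illustrative, not taken from any particular NoSQL product.

```python
# Minimal sketch of why partitioned ("share-nothing") data stores generate
# east-west traffic: a write lands on one primary partition and is then
# replicated to REPLICAS additional partitions.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical cluster members
REPLICAS = 2                                        # copies beyond the primary

def owner(key: str) -> int:
    """Hash the key to the index of its primary partition."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return int(digest, 16) % len(NODES)

def placements(key: str) -> list[str]:
    """Primary partition plus REPLICAS successors; every extra copy is one
    more inter-node (east-west) transfer on each update."""
    start = owner(key)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS + 1)]

print(placements("user:1001"))    # e.g. ['node-c', 'node-d', 'node-a']
```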

The nature of a clustered platform also means a lot of cross-monitoring of the well-being of each node in a cluster, often referred to as “keep-alive messages.” The share-nothing approach at the per-node level often results in an increase in cluster-level activity; as east-west traffic increases, so does the importance of inter-node networking latency and throughput.
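
A keep-alive scheme can be as simple as the sketch below: each node periodically announces itself over UDP, and peers treat a node as suspect if no heartbeat arrives within a timeout. The peer addresses, port and timings are illustrative, but even this minimal pattern adds a steady background of inter-node traffic that grows with cluster size.

```python
# Minimal sketch of the "keep-alive" cross-monitoring described above.
# Peer addresses, port, and timing values are hypothetical.
import socket
import time

PEERS = [("10.0.0.2", 7000), ("10.0.0.3", 7000)]   # hypothetical cluster peers
INTERVAL = 1.0                                      # seconds between heartbeats
TIMEOUT = 3.0                                       # silence before a peer is suspect

def send_heartbeats(node_id: str) -> None:
    """Announce this node's liveness to every peer, forever."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        for peer in PEERS:
            sock.sendto(node_id.encode(), peer)
        time.sleep(INTERVAL)

def is_suspect(last_seen: float) -> bool:
    """A peer whose last heartbeat is older than TIMEOUT is treated as failed."""
    return time.time() - last_seen > TIMEOUT
```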

Intra-server resource sharing

Let’s look at intra-server resource sharing first. We would all hope that intra-server resource sharing is the ideal approach, but we all know that Linux SMP overhead has been gradually increasing from release to release, all in the name of resource-sharing optimization.

This trend is at odds with the fact that certain in-server components have advanced to the point that they can act as smart resources running largely independently within a server. CPU cores, for example, are definitely parallel resources by that definition; SR-IOV-capable devices are another. It seems these parallel component resources could be used to benefit certain high-performance applications without inhibiting other applications that prioritize resource sharing over performance.
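
A simple way to treat CPU cores as independent, parallel resources is CPU affinity: a high-performance process claims a dedicated core while the rest of the machine stays under the general-purpose scheduler. The sketch below is Linux-only, and the core number chosen is purely illustrative.

```python
# Minimal sketch: pin the current process to one dedicated core, leaving the
# remaining cores to the general scheduler and to applications that prefer sharing.
import os

DEDICATED_CORE = 3                       # hypothetical core reserved for this workload

available = os.sched_getaffinity(0)      # cores this process may currently run on
print("allowed before:", sorted(available))

os.sched_setaffinity(0, {DEDICATED_CORE})    # restrict this process to a single core
print("allowed after:", sorted(os.sched_getaffinity(0)))
```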

What we need is an application virtualization sub-framework that can seamlessly co-exist with containers, VMs or both. This application-centric framework should be more closely coupled with SR-IOV and be implemented as a user-space technology, without introducing any new functional dependency on the hypervisor or Linux OS it runs on.

Inter-server resource sharing

As for inter-server resource sharing, the state of containers and clusters largely reflects their origins. Containers were first created for cloud-native web services, which focus on creating abstractions for resources that can be spawned on demand and decoupled from each other to remove complex interdependencies.

While those are definitely worthy goals, they fall short of becoming a foundation for all cloud-native distributed applications and services. That's because many distributed applications are partitioned into functions spread across multiple servers, where inter-server resource sharing and synchronization are key. As mentioned, NoSQL databases are a case in point, as are Network Functions Virtualization (NFV) and big data processing.

So, we are at an interesting crossroads: containerizing applications is meant to allow for elasticity and effective resource sharing across servers, while NoSQL and other “share-nothing” applications demand more local performance.

To enable true cloud-native application scale-out, with container encapsulation of both application classes, we need a better approach to intra-server high-performance applications and a better approach to inter-server distributed applications. Both are possible and could be a reality in the very near future.
