This column is available in a weekly newsletter called IT Best Practices.
Companies are moving applications to the cloud because they want a more flexible and dynamic computing infrastructure. Still, many organizations hesitate to migrate their most important applications to a public cloud, for a few key reasons.
First and foremost is security. Because the public cloud's infrastructure isn't under the enterprise's control, some organizations can't use it. Banks and other security-conscious companies, for example, eschew the cloud because security isn't entirely in their own hands.
A second concern is predictable, consistent performance. The cloud may not be suitable for big transactional databases, and many companies believe they still need to operate their own big iron to run those applications. A third concern is less tangible but no less real: the nagging worry that if something goes wrong in the public cloud, they won't be able to trace the problem back to its exact source. Again, it comes down to a lack of control and visibility.
Bracket Computing has an innovative new concept that is designed to make the public cloud consumable for wary enterprise organizations. Calling it the world's first cloud virtualization system, Bracket claims to have built a software computing architecture that allows enterprises to harness the scale, elasticity and efficiency of the public cloud while maintaining the security, performance and control of a dedicated hardware data center.
Bracket's solution is unlike anything I've seen. It's a virtualization layer that sits on top of hyperscale clouds such as those from Amazon and Google and fits between the guest operating system and the data center's resources. Here's how Bracket explains the concept.
The first generation of virtualization was a hypervisor that fit between the server hardware and the guest operating system. Fundamentally, it separates a workload, or application, from the capacity of the server it runs on, providing portability and allowing users to move applications around. Bracket Computing does the same thing, but for an entire data center.
In the Bracket model, virtualization technology creates an environment called a Computing Cell. Inside the Computing Cell is Bracket's patented virtualization technology, a lightweight layer that allows the cell to span multiple public clouds. This layer is also where the necessary enterprise security, storage, compute and network controls reside. All of the applications, along with the data those applications use, also live in the Computing Cell construct. What's more, everything within the cell, and the way Bracket defines the perimeter and protects the data, is encrypted all the time. The cell can be loaded on top of multiple public clouds, or moved from one cloud to another, with consistent performance, consistent security, and consistent service level agreements.
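To make the Computing Cell idea concrete, here is a minimal sketch in Python. The names (`ComputingCell`, `Workload`, `migrate`) are my own illustrative assumptions, not Bracket's actual API; the sketch only models the two properties described above: everything inside the cell stays encrypted, and the cell can move between clouds while its controls travel with it.

```python
# Hypothetical model of a Computing Cell -- names are illustrative,
# not Bracket's real interfaces.
from dataclasses import dataclass, field


@dataclass
class Workload:
    name: str
    encrypted: bool = True  # data inside the cell is always encrypted


@dataclass
class ComputingCell:
    provider: str  # the public cloud currently hosting the cell
    workloads: list = field(default_factory=list)

    def deploy(self, workload: Workload) -> None:
        # The cell enforces its own perimeter: unencrypted data is rejected.
        if not workload.encrypted:
            raise ValueError("everything inside the cell must be encrypted")
        self.workloads.append(workload)

    def migrate(self, new_provider: str) -> None:
        # Security, storage and network controls live in the cell itself,
        # so service levels stay consistent when the cell changes clouds.
        self.provider = new_provider


cell = ComputingCell(provider="aws")
cell.deploy(Workload(name="orders-db"))
cell.migrate("gcp")  # same cell, same controls, different public cloud
```

The point of the sketch is that the controls belong to the cell, not to any one cloud provider, which is what lets the cell move without renegotiating security or service levels.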
During development the company found that the problems of enterprise computing could not be attacked piecemeal. Consequently Bracket has built security, storage and virtualization as a single architecture. This systems approach also allows third-party components to work seamlessly alongside Bracket’s.
Within this innovative architecture, Bracket has developed a unique storage system that yields very high performance, data integrity and availability, integrated with a state-of-the-art multi-key encryption system. All of the components are transparent and tightly integrated in the virtualization layer so that everything works in sync.
Bracket refers to itself as "the Nth data center" because if an organization has two data centers, the Computing Cell just looks like a third. The company has done a lot of work to fit into enterprises' existing orchestration, management, monitoring and provisioning tools and to look and operate just like another data center.
By virtualizing the cloud, Bracket removes the issues that arise because a company doesn't control the hypervisor in the cloud. Whether it's Amazon, Google or some other service provider, the cloud operator decides which hypervisor it will use to provide the virtual infrastructure. By inserting the Bracket virtualization layer into this mix, the organization takes back control.
When an administrator logs into the Bracket Computing portal, they see a dashboard snapshot of everything going on in their environment: how many workloads are running, how many encrypted volumes are active, the budget available for users to deploy workloads, the number of active users, and so forth. The dashboard also exposes the controls and governance an administrator can exert over the Bracket environment.
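The snapshot described above can be thought of as a simple data structure. The field names below are assumptions for illustration only, not Bracket's actual schema; the budget check is likewise an invented example of the kind of governance an administrator might apply.

```python
# Illustrative model of the dashboard snapshot fields mentioned in the text.
# Field names are assumptions, not Bracket's actual schema.
from dataclasses import dataclass


@dataclass
class DashboardSnapshot:
    running_workloads: int
    encrypted_volumes: int
    remaining_budget_usd: float  # budget left for deploying workloads
    active_users: int

    def over_budget(self) -> bool:
        # A hypothetical governance check: block new deployments
        # once the allocated budget is exhausted.
        return self.remaining_budget_usd <= 0


snap = DashboardSnapshot(
    running_workloads=42,
    encrypted_volumes=120,
    remaining_budget_usd=15000.0,
    active_users=17,
)
```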
At the same time Bracket enables flexibility and agility for IT end users. This is done through workload templates. An administrator can create a template and configure a set of compute and storage resources for a particular type of application. For example, the company can have general purpose, compute-optimized, or high memory instance types, depending on their particular application requirements.
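A workload template of the kind described above might look like the following sketch. The instance-type names mirror the general-purpose, compute-optimized and high-memory categories in the text, but the resource figures and the `WorkloadTemplate` structure itself are invented for illustration.

```python
# Hypothetical workload-template sketch; the categories come from the text,
# the numbers and structure are assumptions.
from dataclasses import dataclass

INSTANCE_TYPES = {
    "general": {"vcpus": 4, "memory_gb": 16},
    "compute_optimized": {"vcpus": 16, "memory_gb": 32},
    "high_memory": {"vcpus": 8, "memory_gb": 128},
}


@dataclass
class WorkloadTemplate:
    name: str
    instance_type: str  # one of the INSTANCE_TYPES keys
    storage_gb: int

    def resources(self) -> dict:
        # Expand the template into the concrete compute and storage
        # resources an end user would get when deploying from it.
        spec = dict(INSTANCE_TYPES[self.instance_type])
        spec["storage_gb"] = self.storage_gb
        return spec


# An administrator defines the template once; end users deploy from it
# without thinking about the underlying resources.
tmpl = WorkloadTemplate("analytics", "high_memory", storage_gb=500)
```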
Bracket takes all of the controls that an enterprise typically wants in a data center and reduces them to service level objectives. A customer uses sliders on the dashboard to define the service levels they want, and Bracket automatically configures the environment to meet them. This is called Managed by Directive. The enterprise can change those levels at any time, even with the application running, and have Bracket reset the environment to the new service levels. An end user doesn't need to think about the underlying resources; they just set the sliders to the service levels they want.
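The Managed by Directive idea can be sketched as a function that translates slider positions into a resource configuration. Everything here is an assumption for illustration: the slider names, the 0-10 scale, and the thresholds are invented, and the real system would drive far more than two settings.

```python
# Sketch of the "Managed by Directive" concept: service-level sliders
# are translated into a concrete configuration. Slider names, scale
# and thresholds are invented for illustration.
def configure(iops_level: int, durability_level: int) -> dict:
    """Map 0-10 service-level sliders to hypothetical resource choices."""
    return {
        "storage_tier": "ssd" if iops_level >= 7 else "standard",
        "replicas": 3 if durability_level >= 7 else 2,
    }


# Raising a slider reconfigures the environment -- the user states the
# service level they want, not the resources that deliver it.
current = configure(iops_level=9, durability_level=5)
```

The design point is the inversion of control: the customer declares outcomes (a service level), and the platform works backward to the resources, rather than the customer picking resources and hoping they add up to a service level.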
In development for three years, the Bracket virtual cloud has been publicly available since the second half of 2014. Although the company is still young, Bracket has some fairly large customers that are in the process of deploying the solution at a large scale.