Red Hat is contributing its Red Hat Storage Hadoop plug-in to the Apache Hadoop open community to help transform Red Hat Storage into a fully supported, Hadoop-compatible file system for Big Data environments.
The move came as Red Hat fleshed out its expanding enterprise storage and Big Data management strategy for the cloud.
Red Hat said, "We are working with the open cloud community to support Big Data customers. Many enterprises worldwide use public cloud infrastructure, such as Amazon Web Services, for the development, proof-of-concept and pre-production phases of their Big Data projects.
"The workloads are then moved to their private clouds to scale up the analytics with the larger data set. An open hybrid cloud environment enables enterprises to transfer workloads from the public cloud into their private cloud without the need to re-tool their applications."
Red Hat said being involved in the open cloud community through projects like OpenStack and OpenShift Origin would "help meet enterprise Big Data expectations".
Red Hat Storage, built on the Red Hat Enterprise Linux operating system and the GlusterFS distributed file system, pools inexpensive commodity servers to provide cost-effective, scalable and reliable storage systems for Big Data.
Red Hat intends to make its Hadoop plug-in for Red Hat Storage available to the Hadoop community later this year. Currently in technology preview, the plug-in provides a new storage option for enterprise Hadoop deployments "that delivers enterprise storage features while maintaining the API compatibility and local data access the Hadoop community expects", said Red Hat.
This story, "Red Hat contributes Hadoop plug-in for cloud Big Data projects", was originally published by Computerworld UK.