REVIEW: RHEL 7 anchors enterprise-focused ecosystem

Latest version of Red Hat focuses on containerized instances of the OS.

Red Hat Enterprise Linux 7 is more proof that operating systems aren’t dead; they’re becoming vessels for containerized applications. RHEL 7 performed well in our testing, but it’s worth noting that this is no longer just a simple OS: it’s an increasingly abstracted component in the larger Red Hat ecosystem.

Although Red Hat took a long time between the RHEL 6 and 7 releases, it shipped numerous updates during that period aimed at keeping infrastructure stable.

Another key point: Red Hat’s traditional virtualization core seems to have taken a back seat to Linux Containers (LXC), as Red Hat has embraced and financially sponsored Docker’s containerizing components in a number of ways.

Red Hat has increased support for hosting Windows products in this release. For Red Hat, Microsoft is less of a target and more of a grumbling ally in terms of systems support. Overall support for hosting virtualized Windows Server Editions is less obscure than when we tested RHEL 6.

And with an updated Samba 4 (an SMB/CIFS/Active Directory connector), RHEL 7 is an almost-full member of a Microsoft network, where others, like Apple, are balkanized by a lack of simple Windows compatibility in business settings.

In testing, we found that the RHEL 7 OS stands alone well, plays well with others, and is progressively easier to both deploy and configure.

Red Hat has the onus of an OS subscription model that prevents instances from being out-of-support or highly re-configured — no interesting but unstable kernels allowed.

The hackability of this OS is poor, but hacking is what other Red Hat and Fedora derivatives are for. This one’s all about stable business deployments. The Red Hat core may be inviolate by policy, but for some systems personnel, better the devil you know, than the apt-get you don’t.

Like Canonical’s Ubuntu 14.04, RHEL 7 is a huge pile of code, running 4.3GB in the release candidate DVD ISO we downloaded (the minimal image seems to be about 535MB).

Both the Red Hat and Ubuntu Server distros are poised toward application instance isolation methods that revolve around Linux containers (LXC). The pre-production use of Docker instance provisioning methods we described in our recent Ubuntu Server review allows developers to further the concept of workload isolation, similar to the way that Java and Dalvik isolate workload instances on Android phones. Handily, Docker reached production status at roughly the same time RHEL 7 was released.
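
The flavor of that provisioning is simple enough to show in a few lines. The sketch below is illustrative only; the base image and the httpd workload are our own choices, not anything Red Hat prescribes:

    # pull a base image and spawn an isolated web workload inside it
    # (the 'centos:7' image and the httpd example are assumptions for illustration)
    docker pull centos:7
    docker run -d --name webtest -p 8080:80 centos:7 \
        /bin/bash -c "yum -y install httpd && httpd -DFOREGROUND"
    docker ps    # the workload appears as a container, not a full OS instance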

Red Hat’s support of LXC is aimed at making it more attractive for users to deploy Type-2, OS-based virtualization rather than Type-1 bare-metal hypervisors. LXC allows containers to be both lightweight and highly isolating, or so we hope, as containers haven’t yet been subjected to rigorous third-party security evaluation.

Making the containerized workload the basic, atomic unit, packed into what has become a generic container format, allows it to be liberated from any particular RHEL 7 instance, too. If all works well, moving a workload from one OS instance to another is transparent to the workload.
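
Here’s a rough sketch of what that movement can look like with Docker’s own plumbing, assuming a container named webtest as in the sketch above and a second host we’ll call hostB (both names are placeholders):

    # on host A: snapshot the running container to an image and save it to a tarball
    docker commit webtest webtest-image
    docker save webtest-image > webtest-image.tar
    scp webtest-image.tar hostB:                  # hostB is a placeholder host name
    # on host B: load the image and start the workload again
    # (the committed image carries the original command along with it)
    docker load < webtest-image.tar
    docker run -d -p 8080:80 webtest-image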

It then becomes the job of various related stack components, especially OpenStack, to manage, deploy, secure, provision, and, above all, move workloads among hosts. This is also where Canonical and Red Hat start to look different in terms of organizational value.

As an example, Red Hat sponsors Project Atomic, which surrounds the containerizing logic of Docker. Once an app tests successfully as an isolatable container instance, it can become a package, much as software appliances (think: turnkeylinux.org appliances) are found and deployed.
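
In miniature, that packaging step can be as small as a Dockerfile; the one below is our own illustrative example and has nothing Atomic-specific about it:

    # Dockerfile: package a tested app as a shippable container image
    FROM centos:7                    # base image; an assumption for illustration
    RUN yum -y install httpd         # a real application payload would be added here
    EXPOSE 80
    CMD ["httpd", "-DFOREGROUND"]

Built with "docker build -t myapp .", the result can be found and deployed much like a downloadable appliance.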

Despite similar containerizing, Ubuntu Server exceeds Red Hat in popularity as a spawned public-cloud OS instance. Things may change if Red Hat comes to be perceived as the more solid container host.

The Major Changes

RHEL 7 now uses the XFS file system instead of ext4 by default. Support for XFS means that RHEL can handle extremely large file systems, theoretically as much as 8 exabytes, which no hardware we can imagine would support. Red Hat’s official support apparently goes only to 500 petabytes. We did not test either claim.

Although such file systems are thought to be gargantuan, there are 300 petabyte XFS file systems in production today. The XFS implementation allows RHEL 7 more favorable comparisons with Oracle/Sun’s ZFS file system/volume management scheme, Microsoft’s proprietary NTFS file system, and VMware’s VMFS. The older yet mature ext4 can still be used, as can other file systems.
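
For those curious what the default looks like at the command line, creating and inspecting an XFS volume is a short affair; the device and mount point below are placeholders:

    # format a spare block device with XFS and mount it
    mkfs.xfs /dev/sdb1                       # /dev/sdb1 is a placeholder device
    mkdir -p /data && mount /dev/sdb1 /data
    xfs_info /data                           # report block, inode, and log geometry
    # XFS file systems can be grown (not shrunk) while mounted:
    xfs_growfs /data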

Those desiring cross-platform compatibility with Windows Active Directory receive new Kerberos support that allows them to do this, subject to caveats regarding Kerberos time synchronization; we found that time-synching to the same host and adhering to the correct time zone become a prerequisite.

Identity management is cross-platform between the two. We linked our Red Hat identity to our in-house Windows 2012 R2 Active Directory domain without a hitch.

Much of this comes courtesy of realmd, a system service that discovers domain resources via DNS. Realmd then offers linking options: one that provides only a proxied user identity without system policies, and another method for direct identity control. It connects to the Identity Management (IdM) components for user resources in terms of policy management. This piece isn’t quite as easy as it sounds, but the result can be a trust relationship between RHEL 7 and Windows AD that’s comparatively mature and non-invasive.
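
In practice, the realmd workflow boils down to a discover-then-join pair of commands; the domain and account names below are placeholders for a lab AD setup like ours:

    # discover the AD domain via DNS, then join it
    realm discover ad.example.com                 # placeholder domain
    realm join --user=Administrator ad.example.com
    # verify that AD users now resolve on the RHEL 7 host
    id administrator@ad.example.com
    realm list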

Lacking is a methodology to allow Chromebook sessions from within a RHEL “terminal server” along the lines of what Citrix used to do for VDI sessions for Microsoft, although Ericom fills in some of the blanks in this sort of VDI model.

The RHEL 7 kernel update to 3.10 is similar to Canonical’s implementation, but in terms of distribution, we found RHEL 7 slightly easier to deploy on bare metal, and about the same on VMware, Hyper-V, and Citrix XenServer. Deploying Windows guests seems better and more cogently supported in RHEL 7, where it was painful on RHEL 6. We’re not sure anyone actually does this. Or why.

Hands-on

Installation has changed, largely for the better. Bare-metal installations had no surprises. OpenStack installations were more sophisticated to orchestrate initially, but also worked without drama. We discerned that there is no good reason to use less than 4GB of memory for an instance. Boot can be accomplished more readily by alternative methods, including various boot-from-SAN schemes, as well as image deployments.

We tested upgrades, and upgraded from RHEL 6.5 to RHEL 7. Admittedly, our installation wasn’t very sophisticated, only a basic RHEL LAMP stack with some Tomcat and moldy test apps. Again, no drama.
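
For those attempting the same jump, the in-place route runs through a pre-upgrade assessment and then the upgrade tool itself. The sketch below reflects our understanding of that tooling; the package names are our assumption of the relevant bits, the repository URL is a placeholder, and Red Hat’s own documentation should be the final word:

    # on the RHEL 6.5 host: run the pre-upgrade assessment, then the upgrade tool
    yum -y install preupgrade-assistant redhat-upgrade-tool
    preupg                                    # writes an assessment report to review first
    redhat-upgrade-tool --network 7.0 --instrepo http://repo.example.com/rhel7/
    reboot                                    # the actual upgrade runs on reboot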

We deployed in scratch form (from ISOs) on VMware 5.1 and 5.5, Citrix XenCenter 6.2, and Microsoft Hyper-V 3. Note that choosing defaults will likely jail an installer into a version of minimalism, meaning that unless instructed otherwise, Red Hat installs the absolute bare minimum of functionality. We rather like the minimalism, but would have preferred to have been warned first.

At installation time, we could choose from several types of base environments, as Red Hat calls them: minimal, infrastructure server, file and print server, basic web server, virtualization host, and server with a GUI.

Each selection has a set of components to be chosen, and you can’t click one button and get all options within a base installation; each one must be actively chosen. We couldn’t drill down into the options, either, which somewhat obscures the choices. Further, a most-minimal installation mandates further configuration steps to add the instance to a subscription manager (local, VAR, or Red Hat-hosted) site. Networking is not turned on by default in a minimal instance, either, leading to lots of merriment if you weren’t prepared for this. The PXE-boot answer file for remote provisioning of bare instances isn’t as long as in prior editions.
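
On first boot of a minimal instance, the cleanup steps look roughly like this; the interface and account names are placeholders, not anything Red Hat mandates:

    # networking is off by default on a minimal install; enable the NIC at boot
    sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eno1   # placeholder NIC
    systemctl restart network
    # register the instance so yum has somewhere to go
    subscription-manager register --username admin@example.com --password 'secret'
    subscription-manager attach --auto
    yum -y update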

The virtualization host option also implies a minimal configuration, as if it were to be used as a Type-1/bare-metal hypervisor. It can be configured for use with NFS (an easy way to bulk-transport files and folders into and out of the instance), remote management via OpenLMI, development tools, smart card authentication, and useful libraries.

We used a bare-metal installation to install each type (sometimes with low memory), allowing us lots of time to surf the Internet between re-installations. By default, when a GUI user space becomes available via the appropriate selection, Red Hat’s server boots to GNOME.

It’s not easy to stir-fry your own combination of items. We suggest using CentOS or Fedora for combinations that might be difficult or even bizarre to support, because Red Hat will support such configurations, but it’s not going to be their core strength. Instead, the predefined combinations form at least a premise for support to work from, and for dependencies to be known, rather than the infinite possibilities that Linux otherwise provides.
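
If you do want to stir-fry your own mix after the fact, yum’s package groups are the sanctioned route; the group names below are the ones we’d expect to find, so check yum grouplist on your own instance:

    # list the environment groups and add-ons Red Hat defines
    yum grouplist
    # bolt a GUI or a toolchain onto a minimal instance after the fact
    yum -y groupinstall "Server with GUI"
    yum -y groupinstall "Development Tools"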

The largest common denominator of runtime choices became available by selecting the full set of Server with GUI options; most rational choices were available there. The installer UI places its form-acceptance controls in a combination of lower-right and upper-left screen positions. The choice that initially eluded us was the Done button, a strange UI layout that made fools of us.

After selection choices, dependencies are checked. If there’s still a problem, you’re guided back into the selection that caused the dependency exception. It’s more beautiful than Canonical’s Ubuntu method, and more sophisticated, at the price of discernibility.

In terms of performance, GNOME feels zippy, but the GUI-less server instances we deployed screamed. Optimization for varying roles is fairly well documented, and Red Hat can increase throughput through a new network port teaming feature. InfiniBand is easily supported for those with drive arrays, SSDs, or hybrid SSD/conventional cached arrays.
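
The port teaming feature rides on the new teamd framework and is exposed through NetworkManager. A minimal active-backup team looks about like this; the interface names and the runner choice are placeholders:

    # create a team device with two member ports in active-backup mode
    nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
    nmcli con add type team-slave con-name team0-port1 ifname em1 master team0
    nmcli con add type team-slave con-name team0-port2 ifname em2 master team0
    nmcli con up team0
    teamdctl team0 state            # confirm which port is currently active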

Red Hat’s initial discovery process found all of our multi-core/multi-processor beasts correctly. In fact, single-core hardware platforms aren’t even supported at this point; multi-core is your only choice. The same goes for 32-bit CPU families: not supported, likely because of their memory-address limitations and the excruciatingly slow pace we saw when we limited memory. Unlike Ubuntu, there are no RHEL 7 ARM CPU ports available at press time.

How We Tested Red Hat Enterprise Linux 7

We downloaded RHEL 7 RC ISOs from the Red Hat site. We then installed them in the lab onto virtual machines (not recommended unless you have enough memory) on Type 2 hypervisors (Parallels and VirtualBox), then VMware 5.1/5.5, Microsoft Hyper-V 3, and an ancient bare-metal four-core Dell server, all of which run on a GbE switched fabric using a simple D-Link GbE switch.

In the NOC hosted at Expedient/nFrame in Carmel, Ind., we deployed numerous instances via PXE onto a Lenovo ThinkServer RD530, and then, using OpenStack Icehouse components, onto an HP DL-580 Gen8 machine running four processors with 15 cores each (and 192 glorious GB of memory). The NOC interconnects via Gigabit and 10-Gigabit Ethernet to a SAN, and via Gigabit Ethernet to a core router.

We brought up each base installation type and experimented, noting the base versions installed. We forced updates and were rarely rewarded with any, meaning the ISOs we downloaded were remarkably up to date.

From Virtualization to Containers

If the change from 32- to 64-bit computing enabled more memory, and more memory enabled virtualization, that harmony was still somewhat static: hypervisors simply re-represented bare-metal hardware features to the operating system instances running on top of them.

In turn, operating systems held applications, and the safety needed to isolate key systems or production applications warranted running multiple operating system instances on a hypervisor.

The idea of containers, as espoused in LXC, is to remove much of the attachment of an application’s sockets to hardware, and thus the dependencies on things like storage, networking, and so forth. It’s all about the workload.

Each server instance then becomes a multiple-workload instance where the workloads are confined into containers. Containers, in turn, map easily and metaphorically onto the shipping containers seen on cargo ships, walled next to each other, each actively doing something but otherwise oblivious to the outside world. LXC instances can be further insulated from OS resources that might be affected or hacked by using SELinux-confined user spaces, which put yet more distance between the container and the operating system.
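
To the extent it can be eyeballed from the host, that SELinux confinement is visible with stock tools; the svirt_lxc context below is what we’d expect to see on confined container processes, not something we have exhaustively verified:

    # confirm SELinux is enforcing, then look at the labels on container processes
    getenforce
    ps -eZ | grep svirt_lxc    # confined Docker/LXC workloads show an svirt_lxc_net_t context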

Container overhead is smaller than what’s used now when multiple independent virtualized instances of operating systems are spawned in an attempt to isolate workloads. OS and platform licensing costs drop as the number of spawned instances is reduced, yet containers are nearly at the point where they can be manipulated as easily as OS instances for load balancing, logical partitioning, and perhaps redundancy and safety.

What’s yet to be proven are the viability and interchangeability of containers among platforms, as well as the overall effect of the reduced attack surface that containers are said to provide.

Copyright © 2014 IDG Communications, Inc.
