Chapter 5: Overhead or Under-Floor Installation?

Cisco Press


These classifications apply to both structured cabling and patch cords. Plenum and low smoke/zero halogen cables fall into the same general category of cabling, but because each has different properties, one type cannot be directly substituted for the other. Their usage or prohibition in different parts of the world is based upon how local countries measure toxicity. Specify in your design package the appropriate cabling type to be used in your Data Center's air distribution spaces—plenum or low smoke/zero halogen—based upon regional building codes.

In specific situations, you may be able to reduce costs by strategically using non-plenum cables, which are less expensive, in chosen locations. If you route structured cabling in a cable tray or ladder system above the server rows but below the ceiling plenum, for example, it is probably permissible to use non-plenum cabling. You can also use non-plenum patch cords for connections that don't go above the Data Center ceiling or below its raised floor, such as direct connections from a server to peripheral equipment within the same cabinet.

Such scenarios should be attempted sparingly, however. They require you to micromanage how patch cords are used to ensure that non-plenum cables are not routed where they are not permitted and that more expensive plenum or low smoke/zero halogen cables are not wasted in Data Center locations where they aren't needed. In a large server environment, the cost savings probably are not worth the logistical challenges. It might be worth attempting in a small Data Center, perhaps one with one or two server rows, or in one that is intended to be temporary.


Note - When you install plenum-rated or low smoke/zero halogen structured cabling in your Data Center plenum, make sure that other supporting components used in your cabling infrastructure, such as cable ties or patch cords, are similarly rated. They, too, can be a source of toxic fumes if burned.


The regional fire marshal is usually the ultimate authority over what type of cabling can be used in an installation. In some instances, they may require the use of plenum cabling in a space typically considered a non-plenum environment. This most often occurs in schools, but be aware that it might happen to your Data Center project as well.

Ceiling Components

There are several elements involved in an overhead infrastructure system. Structured cabling and electrical conduits are typically installed above a Data Center ceiling, supported and routed by cable trays or ladder racks. A cable tray is a shallow basket made of crossed metal bars. A ladder rack is a narrow metal frame, resembling a ladder, that is installed horizontally. Both items are secured by brackets to the Data Center's true ceiling and configured along whatever paths you want your infrastructure to follow. Electrical conduits and data cabling are then placed atop them. Cable trays and ladder racks have gaps between their metal bars, enabling air to flow through wherever cords and cables aren't gathered together to restrict it.

The data ports and power receptacles that the infrastructure terminates into are normally housed in metal raceways that are secured to the ceiling and suspended overhead. The raceways help organize the infrastructure, enabling it to be clustered above each server cabinet location. An alternate approach is to secure vertical posts through the ceiling above each cabinet location and mount individual power outlets and data faceplates directly to them. This practice is more common in lab environments.

Be aware that fire codes in many regions require an 18- or 19.7-inch (45.7- or 50-centimeter) gap between automatic sprinkler heads, most often installed in the suspended ceiling, and any solid or opaque objects. In a Data Center, the fire codes generally apply to raceways or ladder racks filled with cables, server cabinets, infrastructure equipment, or boxes. Make sure that your cable management components are installed to provide sufficient clearance. Don't place them too low, however, as you want to be able to reposition server cabinets in the room without worrying about jostling any overhead raceways.
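The clearance rule above is simple subtraction, and it can be worth sanity-checking mounting heights on paper before anything is hung. The sketch below is a minimal, hypothetical helper, not from the source; the 18-inch figure is the common minimum cited in the text, and the function name and example heights are assumptions. Always verify the actual required clearance with your local fire marshal.

```python
# Hypothetical check: does the gap between a sprinkler head and the
# top of a filled cable tray or raceway meet the code-required clearance?
# 18 in (45.7 cm) is the common minimum mentioned in the text; some
# regions require 19.7 in (50 cm). Verify the figure locally.

REQUIRED_CLEARANCE_IN = 18.0  # assumed code minimum

def tray_clearance_ok(sprinkler_height_in: float,
                      tray_top_height_in: float,
                      required_clearance_in: float = REQUIRED_CLEARANCE_IN) -> bool:
    """True if the vertical gap between the sprinkler head and the
    highest solid surface below it meets the required clearance."""
    return (sprinkler_height_in - tray_top_height_in) >= required_clearance_in

# Sprinklers in a suspended ceiling at 114 in; tray tops at two heights:
print(tray_clearance_ok(114, 94))  # True: 20-in gap >= 18 in
print(tray_clearance_ok(114, 98))  # False: 16-in gap < 18 in
```

The same check applies to the tops of server cabinets and any boxes stored in the room, since the codes generally cover all solid or opaque objects.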


Note - Many of the Data Centers I manage possess four-post server cabinets that are open at the top, bottom, and sides, rather than solid. (This style of cabinet is shown in all of the figures in this chapter.) Inspectors in some cities have allowed these cabinets to intrude slightly into the clearance area, ruling that they only present an opaque surface at the highest point at which a server can be installed in them, which is a few inches (several centimeters) below their open top. While I don't recommend trying to skirt any codes, if you are designing a server environment in a building with limited ceiling heights, you may want to request clarification from local inspectors about how close open-top cabinets may come to the ceiling. Using them in your Data Center might enable you to gain a little more space.


You might be tempted to terminate data ports and power receptacles directly into the ceiling tiles, mounting them flush above the server cabinet locations and skipping the use of raceways. While faceplates installed in this manner typically must be secured to the overhead ceiling deck, to keep weight off the tiles, this configuration still causes patch cords to constantly pull on the termination points for the structured cabling. With no strain relief, the weight of the cords can damage this cabling over time. Mounting infrastructure against the ceiling also shifts the location of the data ports and power receptacles higher above the server cabinets. This makes it harder to reach and plug in to the infrastructure and requires longer patch cords and electrical cords.

Figures 5-1 and 5-2 show a sample termination of power and data cabling into back-to-back raceways, above a Data Center's server cabinet locations.

Figure 5-1

Overhead Termination Example—Front View

Note that the front of the server cabinets faces the same direction as the electrical outlets overhead, while the back of the cabinets faces the same way as the data ports. Orienting the raceways so that the data ports appear above the back of the server cabinet locations enables patch cords to directly connect to them from the back of any servers installed within a cabinet. Unfortunately, the electrical cords for the cabinet's power strips can't connect to the overhead power outlets without first being threaded through to the front side of the cabinet. This is admittedly awkward. It is an inescapable fact of having overhead infrastructure that Data Center users have to plug in most patch cords and even some power cables well above their heads—usually at least 8 feet (2.4 meters) off the ground. That means repeatedly climbing a stepladder. It is fairly straightforward to make connections when cabling runs directly up the back side of the cabinet to the raceway on the same side, but much more challenging when you have to reach around to the raceway that faces front.

Because power and data must be separated, this awkward routing is nearly unavoidable. Reversing the position of the raceways would enable the server cabinet power strips to plug in easily but then require the patch cords to be on the opposite side of the raceway with the data ports. Because there are usually more patch cords that exit a server cabinet location than electrical cords, thanks to the presence of cabinet power strips, this is an even less convenient setup.

Figure 5-2

Overhead Termination Example—Back View

One solution would be to redesign the server cabinets or change the orientation of how devices are installed into them, so that the power and data cabling exit on opposite ends of the cabinet—front and back—just as they appear on the overhead raceways. This is not a standard configuration, however, and would need to be examined closely to make sure that installed servers could still easily plug in to the cabinet power strips.

Raised Floor Components

If you choose to use a raised floor system in your Data Center, there are several elements that you must specify as part of its design. These include:

  • Floor height

  • Mechanisms for bringing in equipment

  • Weight-bearing capacity

  • Types and numbers of floor tiles

  • Instructions for terminating infrastructure

  • Other subfloor details

Floor Height

Data Center raised floors vary significantly in height from one facility to another. The ideal elevation for your particular floor depends upon several factors:

  • Size and shape of the server environment

  • How much equipment it contains

  • How much cold air you want to channel into the space

  • How much infrastructure is routed under the floor

Except for that last detail, all of the factors are tied directly to the use of the under-floor area for cooling. Once the floor is tall enough to clear whatever infrastructure you route along the subfloor, the rest of the space is really only necessary for air circulation.

So, what's the ideal floor height for your particular Data Center? You can pay a cooling engineer to calculate an optimal height, but the general principle is simple: the greater the height of your raised floor, the more air that can be circulated through that space. A taller raised floor means that more chilled air can collect and pass through. The greater the volume of chilled air, the more effect it has when it is then channeled above the floor. Also, air obviously flows more freely through a 24-inch (61 centimeter) cavity than one half the size, especially if there are electrical conduits, structured cabling, or other items running through the area that act as barriers.

While every server environment is different, 18- and 24-inch (45.7 and 61 centimeter) raised floors are very common and may be considered default heights for most Data Center designs. This distance enables an ample volume of air to pass through the plenum, even with a large amount of infrastructure routed through the space, but isn't so deep that someone lifting a floor tile cannot easily reach the subfloor. This is helpful for contractors installing infrastructure under the raised floor and for Data Center users who later use that infrastructure.


Note - In late 1998, I refurbished a server environment in Dallas, Texas. The pre-existing room was about 850 square feet (79 square meters). It had a raised floor, but only in the most generous use of the term. The floor was just 4 inches (10.2 centimeters) high, barely tall enough to route infrastructure under the floor. Power cables from server cabinet power strips had to be carefully threaded so as to not bend too sharply when plugged into under-floor receptacles. These outlets often tipped onto their sides after a floor tile was removed, and it was impossible to replace the panel without first rotating the receptacle upright again and carefully tucking power cables out of harm's way. The plenum was considered too small to channel cooling through, so the server environment's air handlers were configured to circulate air above the ceiling. Fortunately, this room was a temporary space. Within 18 months, all of its servers and networking devices were relocated to a more properly designed Data Center.

I used the under-floor space because it was there, but in my opinion such a small raised floor is useless. Putting any infrastructure under such a short floor inhibits airflow, and connecting to under-floor data ports or power receptacles is awkward and tedious and has the potential to damage patch cords or power cables. Even without placing any infrastructure in that space, I'm skeptical that the under-floor air volume can effectively cool even a moderately sized Data Center. If you are not going to have a raised floor that is at least 12 inches (30.5 centimeters) high, I say don't bother having one at all. A small raised floor might make your server environment look neater, but it is not worth the price tag and other drawbacks that come with it.


If you have the building space to spare and you want vendors to work on under-floor infrastructure without actually entering your Data Center space, you may consider a very tall raised floor—high enough for workers to stand upright and install, remove, or test structured cabling or electrical conduits. For this design, cable trays or some other management systems are needed to elevate the infrastructure so that it is within easy reach from the top of the raised floor as well as from the work area. Lighting is also necessary for this space, so that workers are not forced to rely on flashlights or other portable light sources. As with the innovative idea of installing air handlers in a secure corridor adjacent to the Data Center, mentioned in Chapter 4, "Laying Out the Data Center," this is a commendable design idea that is only rarely implemented due to the additional space and money it requires.

Even if you don't make your raised floor tall enough to walk into, be aware that, as the height of your raised floor increases, additional air handlers may be needed to cool and circulate the increased volume of air.

In conjunction with calculations about the height of the raised floor, you must choose whether the top surface of your Data Center floor is to be elevated or at the same level as other (non-raised) floors in the building. The simplest and therefore most common option is to have the floor surface elevated. This enables the bottom level of the floor—the concrete—to remain at one consistent level throughout the building. It requires the use of a lift or ramp to bring equipment into the Data Center, however.
