Building energy efficiency into data centres

Siemon Australia

By Alberto Zucchinali, Data Centre Solutions and Services Manager, EMEA, Siemon
Monday, 25 May, 2015


Getting the physical design of a data centre right is the first step in reducing energy costs.

There are many important energy-saving considerations for those building a data centre. However, the key consideration is the choice of cabling architecture, followed by the choice of infrastructure equipment. The cabling architecture chosen will have a substantial impact on future choices and requires very careful evaluation. According to the Uptime Institute, poor data centre equipment layout choices can cut usability by 50%.

The right architecture

A major choice to begin with is deciding between an ‘any-to-all’ and a ‘top of rack’ (ToR) configuration. An any-to-all structured cabling configuration, using a distribution area as outlined in TIA-942-A and ISO/IEC 24764, will allow you to place servers wherever makes the most sense for space, power and cooling, rather than being restricted to particular cabinets. The ability to place equipment where it makes most sense for power and cooling can avoid the purchase of additional PDU whips and, in some cases, supplemental or in-row cooling for hot spots.

In point-to-point configurations, placement choices may be restricted to cabinets with open switch ports in order to avoid additional switch purchases. This can lead to hot spots, which have a detrimental effect on neighbouring equipment within the same cooling zone. An any-to-all structured cabling system reduces hot spots by allowing equipment to be placed where it makes the most sense for power and cooling, instead of being land-locked by ToR restrictions.

According to some real-world comparative designs, structured cabling costs roughly 10 to 15% of the additional switches required in a ToR topology - not to mention the unused ports, additional power and annual maintenance costs of the latter. Structured cabling can also provide zones with cabling channels of up to 100 metres, enabling greater flexibility in design versus the point-to-point cable assemblies used in ToR configurations, which are typically limited to 10 metres or less.

With a ToR switch in every cabinet (or two for dual primary and secondary networks), the total number of switches depends on the total number of cabinets in the data centre, rather than on the actual number of switch ports needed to support the equipment. This can nearly double the number of switches and power supplies required, compared to structured cabling.

Unlike passive structured cabling, ToR switches require power and ongoing maintenance. For example, based on an actual 39-cabinet data centre using separate dual networks, the cost for equipment and maintenance for ToR is more than twice that of structured cabling.
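
As a rough illustration of the arithmetic behind such comparisons, the Python sketch below estimates switch counts and multi-year costs for a ToR design against an any-to-all design. Every figure in it (cabinet count, port requirements, prices, power draw and maintenance rate) is an assumed example value, not data from the 39-cabinet case above.

```python
# Illustrative comparison of ToR vs any-to-all (structured cabling) access-switch costs.
# Every input below is an assumed example value, not a figure from the article.
import math

cabinets = 39                   # total cabinets (assumed)
networks = 2                    # dual primary and secondary networks
ports_per_network = 900         # active ports actually needed on each network (assumed)
ports_per_switch = 48           # ports per access switch (assumed)

switch_price = 12_000           # purchase price per switch, AUD (assumed)
maintenance_rate = 0.15         # annual maintenance as a fraction of purchase price (assumed)
switch_power_w = 350            # average draw per switch, watts (assumed)
energy_price = 0.25             # AUD per kWh (assumed)
years = 5                       # study period

def lifetime_cost(switch_count: int) -> float:
    """Capital outlay plus maintenance and electricity over the study period."""
    capital = switch_count * switch_price
    maintenance = capital * maintenance_rate * years
    energy_kwh = switch_count * switch_power_w / 1000 * 24 * 365 * years
    return capital + maintenance + energy_kwh * energy_price

# ToR: one switch in every cabinet on every network, whether or not its ports are used.
tor_switches = cabinets * networks

# Any-to-all: switch count driven by the ports actually required.
structured_switches = math.ceil(ports_per_network / ports_per_switch) * networks

for label, count in (("ToR", tor_switches), ("Any-to-all", structured_switches)):
    unused_ports = count * ports_per_switch - ports_per_network * networks
    print(f"{label}: {count} switches, {unused_ports} stranded ports, "
          f"~${lifetime_cost(count):,.0f} over {years} years")
```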

According to the Uptime Institute, the failure rate for equipment in the top third of the rack is three times greater than for equipment in the lower two-thirds. In a structured cabling system, the passive components (cabling) occupy the upper positions, leaving the cooler space below for active equipment. If a data centre does not have enough cooling for its equipment, placing switches in a ToR position may cause them to fail prematurely due to heat, as cold air supplied from under a raised floor warms as it rises.

Thermally efficient layouts

In terms of design, a hot aisle/cold aisle arrangement is the most energy-efficient layout. Cold aisle containment allows perimeter cooling to operate most efficiently by containing and isolating the cold air supply, reducing power costs and supporting lower power usage effectiveness (PUE) ratings. Alternatively, it can be applied to cool higher heat densities and maximise the use of existing data centre floor space: it offers a low-cost method of increasing cooling capacity to up to 13 kW per cabinet without the need for additional cooling equipment.
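
For reference, PUE is simply total facility power divided by IT equipment power, so any reduction in cooling overhead lowers the ratio. The short sketch below works through that calculation with assumed example loads and an assumed 25% cooling-efficiency gain from containment; the figures are illustrative only.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The figures below are assumed, purely to show how containment-driven cooling
# savings translate into a lower PUE.

it_load_kw = 500.0               # IT equipment load (assumed)

# Facility overheads before containment (assumed breakdown).
cooling_kw_before = 300.0
distribution_losses_kw = 50.0
lighting_and_other_kw = 25.0

# Assume containment lets the cooling plant run ~25% more efficiently
# (higher set points, lower fan speeds, more free cooling).
cooling_kw_after = cooling_kw_before * 0.75

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    total_facility_kw = it_kw + cooling_kw + other_overhead_kw
    return total_facility_kw / it_kw

overhead = distribution_losses_kw + lighting_and_other_kw
print(f"PUE before containment: {pue(it_load_kw, cooling_kw_before, overhead):.2f}")
print(f"PUE after containment:  {pue(it_load_kw, cooling_kw_after, overhead):.2f}")
```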

By preventing hot and cold air from mixing, a containment system allows cooling systems to operate at higher temperatures while still sufficiently and safely cooling the equipment to maximise performance and life expectancy. Higher temperatures reduce energy costs through lower fan speeds, higher chilled water temperatures and more frequent use of ‘free’ (ambient air) cooling. This provides additional capacity to cool greater heat densities with the existing cooling system, without investing in more costly supplemental cooling products.

While designing the layout, it is also important to consider the underfloor cabling. Attention should be paid to airflow, void space and the capacity to accommodate growth, not only for the cable but also for other underfloor systems such as power and chiller pipes. When pathways and spaces are properly designed, the cable trays can act as a baffle that helps keep cold air in the cold aisles, or channel the air. Problems occur when there is little or no planning for pathways: they can become overfilled as years of abandoned cable clog the pathways and air voids. When this happens, a reverse vortex can be created, causing the underfloor void to pull air from the room rather than push cool air up to the equipment, which causes performance issues.
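
To put the pathway-planning point into numbers, the sketch below estimates the fill ratio of an underfloor tray section from cable count, cable diameter and tray dimensions. All of the dimensions and the 40% planning limit are assumed illustrative values; the pathway-fill guidance that applies to the installation should take precedence.

```python
# Rough underfloor tray fill estimate: total cable cross-section vs tray cross-section.
# All dimensions and the maximum fill ratio are assumed illustrative values.
import math

tray_width_mm = 300.0            # tray width (assumed)
tray_depth_mm = 100.0            # tray depth (assumed)
cable_diameter_mm = 6.5          # outside diameter per cable (assumed)
cable_count = 600                # cables routed through this tray section (assumed)
max_fill_ratio = 0.40            # planning limit used for this example (assumed)

tray_area = tray_width_mm * tray_depth_mm
cable_cross_section = math.pi * (cable_diameter_mm / 2) ** 2
fill_ratio = cable_count * cable_cross_section / tray_area

print(f"Estimated fill ratio: {fill_ratio:.0%}")
if fill_ratio > max_fill_ratio:
    print("Over the planning limit: expect blocked airflow; remove abandoned cable "
          "or add pathway capacity before adding more.")
else:
    spare = int((max_fill_ratio * tray_area - cable_count * cable_cross_section)
                // cable_cross_section)
    print(f"Roughly {spare} more cables of this size fit within the planning limit.")
```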

Just as underfloor provision requires attention, so too does use of overhead pathways. If using an overhead system, the pathways should be run so that they do not block the natural rise of heat from the rear of cabinets.

Thermally efficient equipment

Thermally efficient cabinets with zero-U cable management and patching space between bayed cabinets and at the end of the row are specifically designed to control airflow, maximising cooling efficiency without sacrificing cabling and equipment density. Cabinets designed to include proper cable management can improve overall airflow and cooling efficiency by keeping cabling out of the horizontal equipment mounting areas and away from equipment cooling fans. High-flow front and rear doors facilitate good airflow to ensure proper hot aisle/cold aisle circulation. Optional accessories, such as roof-mounted cooling fans, brush guards, blanking panels and grommets, promote proper airflow and temperature control.

Some cabinet manufacturers offer the option of vertical exhaust ducts or ‘chimneys’ that passively direct hot exhaust air from active equipment into the return air space to increase HVAC efficiency.

Efficient operating conditions

Be cool, not cold: When defining the operating conditions within the data centre, determine whether you really need all your CRAC units on and the temperature at which they are set. CRAC units operate more efficiently and don’t wear out as fast when they are supplied warmer air. Find out the maximum operating temperature supported by your active electronics manufacturers; most will support higher temperatures than you may expect. Have a cooling assessment performed to determine if your CRAC units are fighting each other and to make sure the cool air is going exactly where you want it.
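
As a simple way of acting on that advice, the sketch below compares cold-aisle sensor readings against the supply temperature range the equipment vendors have confirmed they support. Both the range and the sample readings are assumed placeholder values.

```python
# Check cold-aisle supply temperatures against the range your equipment vendors support.
# The supported range and the sample readings below are assumed placeholders.

supported_min_c = 18.0          # lower bound of supported supply temperature (assumed)
supported_max_c = 27.0          # upper bound confirmed with equipment vendors (assumed)

# Sample cold-aisle sensor readings, degrees Celsius (assumed).
readings = {
    "cold_aisle_1": 16.5,
    "cold_aisle_2": 21.0,
    "cold_aisle_3": 26.0,
}

for aisle, temp in readings.items():
    if temp < supported_min_c:
        print(f"{aisle}: {temp:.1f} C - overcooled; raising the set point here saves energy.")
    elif temp > supported_max_c:
        print(f"{aisle}: {temp:.1f} C - above the supported range; check containment and airflow.")
    else:
        print(f"{aisle}: {temp:.1f} C - within the supported range.")
```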

Measure and manage

Monitoring and supervision systems support frequent or real-time checking of data centre behaviour. They make it possible to measure and monitor efficiency and to make adjustments based on automation, load and demand. The more automated (self-sensing) the system is, the greater the energy savings will be.

Intelligent power distribution can provide valuable energy consumption data while reliably delivering power to critical IT equipment. Different options deliver real-time power information with varying degrees of intelligent functionality, ranging from basic metering to full management, depending on the level of data and control required. The potential benefits of adopting intelligent power distribution include reduced energy costs, improved management and optimisation of power capacity, identification and prevention of potential problems to ensure uptime, and efficient control of power functions so that problems can be resolved quickly.
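
The sketch below illustrates the kind of analysis that metered power data enables: rolling per-outlet readings up to cabinet level, flagging cabinets approaching their provisioned capacity and estimating annual energy cost. The readings, capacities, alert threshold and tariff are all assumed example values; real intelligent PDUs expose comparable data through their own interfaces.

```python
# Illustrative use of metered PDU data: cabinet-level load, capacity headroom and cost.
# Readings, capacities, threshold and tariff are all assumed example values.
from collections import defaultdict

# (cabinet, outlet) -> measured power in watts, as an intelligent PDU might report it.
outlet_readings_w = {
    ("cab-01", 1): 310, ("cab-01", 2): 295, ("cab-01", 3): 450,
    ("cab-02", 1): 120, ("cab-02", 2):  95,
    ("cab-03", 1): 610, ("cab-03", 2): 580, ("cab-03", 3): 540,
}

cabinet_capacity_w = {"cab-01": 3000, "cab-02": 3000, "cab-03": 2000}  # provisioned (assumed)
alert_threshold = 0.80           # warn above 80% of provisioned capacity (assumed)
energy_price_kwh = 0.25          # AUD per kWh (assumed)

# Roll per-outlet readings up to cabinet level.
cabinet_load_w = defaultdict(float)
for (cabinet, _outlet), watts in outlet_readings_w.items():
    cabinet_load_w[cabinet] += watts

for cabinet, load in sorted(cabinet_load_w.items()):
    utilisation = load / cabinet_capacity_w[cabinet]
    annual_cost = load / 1000 * 24 * 365 * energy_price_kwh
    flag = "  <-- nearing capacity" if utilisation > alert_threshold else ""
    print(f"{cabinet}: {load:.0f} W ({utilisation:.0%} of capacity), "
          f"~${annual_cost:,.0f}/year{flag}")
```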

Avoid unnecessary redundancy

Intelligent infrastructure management can provide an up-to-date record of physical layer connections, allowing channels to be dynamically managed to ensure full utilisation of switch ports. This decreases the number of switches that need to be added and powered, while keeping unused ports to a minimum. While this capability can be added to the infrastructure at a later date, it is ideal to include it from the outset so that good housekeeping and management can sustain the most efficient environment once in operation.
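
As a simplified illustration, the sketch below uses an assumed physical-layer connection record to check whether a request for new ports can be patched to existing switches before another switch, with its power and maintenance overhead, is purchased.

```python
# Decide whether a request for ports can be met from existing switches, using an
# up-to-date record of physical-layer connections. All records here are assumed.

ports_per_switch = 48
# Switch -> set of ports currently patched to active equipment (assumed records).
patched_ports = {
    "sw-a1": set(range(1, 45)),   # 44 ports in use
    "sw-a2": set(range(1, 30)),   # 29 ports in use
    "sw-b1": set(range(1, 10)),   #  9 ports in use
}

requested_ports = 20             # ports needed for new equipment (assumed)

free_by_switch = {
    name: ports_per_switch - len(used) for name, used in patched_ports.items()
}
total_free = sum(free_by_switch.values())

print("Free ports per switch:", free_by_switch)
if total_free >= requested_ports:
    print(f"Request for {requested_ports} ports can be patched to existing switches; "
          f"no new switch (or its power and maintenance) is required.")
else:
    print(f"Only {total_free} free ports available; a new switch is genuinely needed.")
```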

Conclusion

The founding principles for data centre design and build focus on firstly selecting the right infrastructure architecture, then choosing the most thermally efficient layout, cabinets and equipment, supported by systems that can monitor and dynamically manage the operation.

Working from the data centre cabling infrastructure up, the best designs offer flexibility and the best total cost of ownership. A structured cabling any-to-all design offers significant benefits over a top of rack configuration, including lower capital and operational expense: better utilisation of switch ports means fewer switches, and therefore lower power and maintenance costs.

Several options exist for managing power and cooling in the data centre. Efficient options to consider include cold aisle containment, thermally efficient cabinet designs with zero-U cable management space and chimney options. Intelligent power distribution can also play a major role in monitoring and controlling power use for a lower-cost, greener environment.
