Power density – the real benchmark of a data centre

Written by Max Smolaks, News Editor, Datacenter Dynamics. Published 2019-09-13.

The digital universe is expanding, smartphones are becoming smarter, even the tardiest businesses are moving online, video games are replacing sports – and all the while, data centre power density keeps rising.

Power density is a metric that usually refers to the power draw of a single, fully populated server rack, measured in kilowatts. And it is rising because we want to cram more chips into the same amount of space, to do more of the things we like.
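To put that in concrete terms, here is a rough back-of-the-envelope sketch – the server count and per-server wattage are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope rack power density (all figures are assumed, for illustration).
servers_per_rack = 40      # 1U servers in a 42U rack, leaving some space for switches
watts_per_server = 250     # a typical dual-socket 1U server under load (assumed)

rack_density_kw = servers_per_rack * watts_per_server / 1000
print(f"Estimated rack density: {rack_density_kw:.1f} kW")   # -> 10.0 kW
```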

A decade ago, average power densities hovered around 4-5kW. Today, according to AFCOM (via Data Center Frontier), most racks run at 7-10kW, at least in the US.

And the extremes are getting more, well, extreme: according to figures from the Uptime Institute, back in 2012 the highest density racks the organization could find were consuming 26kW. Last year, ten per cent of respondents to its data centre survey reported that they were running some of their racks at above 40kW.

Why is this important? Power density has a very serious impact on the choice and configuration of data centre power and cooling equipment, and thus the design of the facility as a whole.

Of course, individual server CPU consumption has been growing over time, as we moved from single-core designs to a whopping 56 cores on Intel’s beefiest Xeon to date. More powerful CPUs require more memory, and we are expecting DDR5 by the end of the year, with double the capacity of DDR4. Modern workloads also need more storage – and the drives need to be powered too.

Blade servers, designed to minimize the use of physical space, have become fairly common, and are offered by all of the major hardware vendors, including Dell, HPE, IBM and Cisco. These can pack hundreds of individual machines into a single rack.

Meanwhile, machine learning workloads – while remaining an exotic use case – require dozens of GPUs arranged in tight banks. Every GPU has thousands of tiny cores that need to be supplied with power, and, on average, a GPU will need more power than a CPU.
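Here is a similarly rough sketch of why GPU-heavy racks climb so fast – the node configuration and wattages are assumed, purely for illustration:

```python
# Rough power budget for a hypothetical GPU training rack (all figures assumed).
gpus_per_node = 8
watts_per_gpu = 300          # a typical data centre training GPU (assumed)
cpu_and_overhead_w = 1000    # host CPUs, memory, fans, NICs, PSU losses (assumed)

node_w = gpus_per_node * watts_per_gpu + cpu_and_overhead_w   # ~3.4 kW per node
nodes_per_rack = 4

rack_kw = node_w * nodes_per_rack / 1000
print(f"GPU rack estimate: {rack_kw:.1f} kW")                 # -> 13.6 kW
```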

These chips convert lots of power into lots of heat, and all of that heat needs to be removed – so data centres that support higher densities require a larger investment in cooling. Precision air cooling remains the most popular method of ejecting tons of heat, according to Uptime.
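Since virtually every watt a rack draws ends up as heat, working out how much air it takes to carry that heat away is a simple energy-balance exercise. The sketch below uses standard air properties; the rack power and allowed temperature rise are assumed figures:

```python
# How much airflow does one rack need? Essentially all electrical power becomes heat,
# so a 10 kW rack is, thermally, a 10 kW heater. Sketch only; rack figures are assumed.
rack_power_w = 10_000        # assumed rack draw
delta_t_k = 12               # allowed temperature rise across the servers (assumed)
air_density = 1.2            # kg/m^3, air at roughly sea level
air_specific_heat = 1005     # J/(kg*K)

airflow_m3_per_s = rack_power_w / (air_density * air_specific_heat * delta_t_k)
airflow_cfm = airflow_m3_per_s * 2118.88   # convert to cubic feet per minute

print(f"Required airflow: {airflow_m3_per_s:.2f} m^3/s (~{airflow_cfm:.0f} CFM)")
```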

Sure, you could opt to raise the average temperature of the data centre instead. The most recent ASHRAE guidelines suggest that, technically, you can run a facility at up to 45°C, since modern equipment is a lot hardier than it used to be – but no colocation provider will ever agree to this, since they have to host a wide variety of kit, including vintage specimens that will not survive this kind of treatment.

High density racks are also heavier, which means they put more pressure on the server room floor – good news if the rack is standing on a concrete slab, bad news if it’s on ancient floor tiles.

Taking all of the above into account, data centres have to be architected for high density from the beginning. Retrofitting an old facility to deliver more power to the rack is possible, but it is a real pain in the proverbial. At the same time, overestimating your average power consumption when designing a shiny new data centre risks leaving you with stranded power or cooling capacity.

Luckily, some recent trends in data centre design make housing high density equipment much easier, now and in the future.

One is the aforementioned trend towards using slab flooring – no hyperscale data centres are built on tiles, and the lessons of hyperscale are currently filtering down to the rest of the industry, along with white box servers and the general “cattle, not pets” attitude.

Another boon for high density is immersion cooling – in which servers are sunk in large, horizontal tubs of dielectric fluid, and can be deployed almost anywhere – no dedicated server room required. Some of the immersion-cooled systems already on the market promise support for up to 100kW in a single unit, but these are still a rare sight; anecdotal evidence suggests they are a nightmare to insure.

Meanwhile, prefabricated data centres ease the pain of retrofitting an older facility for higher density – and some are already equipped with precision air cooling.

Nobody is building high density data centres for fun – such facilities simply help you get more work done while saving on expensive square footage. Power density needs to increase. Otherwise, at some point, the entire planet could be covered in servers.