High Density Computing

High density computing is a broad term, covering all manner of IT equipment from blade systems to supercomputers. Its meaning has evolved through the years: what was considered high density a decade ago barely qualifies as a server today.

Written by Max Smolaks, News Editor at Datacenter Dynamics Published Thursday, 04 May 2017 08:53

In a modern enterprise data center, high density usually means 10-20kW of power consumption per rack, rather than 3-5kW required for a common stack of servers. The additional power is used to run expensive machines with an abnormally high number of CPU sockets and GPUs, required to chew through certain types of workloads.
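The consolidation this implies can be sketched with rough arithmetic. The figures below are hypothetical, taking the mid-range of the per-rack numbers above and an assumed hall power budget:

```python
# Rough consolidation arithmetic using the mid-range of the per-rack
# figures above; the 200kW hall power budget is a hypothetical example.
HALL_POWER_KW = 200          # assumed power budget for one data hall
STANDARD_RACK_KW = 4         # mid-range of the 3-5kW figure
HIGH_DENSITY_RACK_KW = 16    # mid-range of the 10-20kW figure

standard_racks = HALL_POWER_KW // STANDARD_RACK_KW          # 50 racks
high_density_racks = HALL_POWER_KW // HIGH_DENSITY_RACK_KW  # 12 racks

# The same power budget supports far fewer (but denser) cabinets,
# shrinking the floor space the IT load occupies.
print(standard_racks, high_density_racks)  # 50 12
```

The point is simply that power, not floor space, becomes the limiting resource once density rises.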

Classic examples of such work include scientific simulations, 3D rendering and seismic data processing. However, high density computing shouldn't be pigeonholed as a tool for science - it has been finding plenty of adoption among enterprise users, offering a cost-effective platform for virtual desktops, among other things.

Machine learning

One reason for the popularity of high density is the emerging use cases that rely heavily on GPU performance, like machine learning.

Machine learning requires developers to either design or 'train' algorithms without explicitly stating the rules they must obey. Such algorithms are then unleashed onto terabytes of data to find hidden insights and meaningful anomalies. Deep learning goes further - it creates artificial neural networks that attempt to model the way we process sight and sound, in order to enable computers to recognize pictures and human speech.

Such networks depend on complex matrix and vector computations that are the domain of GPUs, which previously drew polygons in the service of PC gamers. While a mainstream Intel processor might have up to 24 cores, modern GPUs have thousands, making them a perfect tool for deep learning. Nvidia claims that for machine learning workloads, its Tesla M40 can deliver eight times more compute than a traditional CPU.
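The matrix computation at the heart of a neural network can be sketched in a few lines. This is a minimal illustration, not any particular framework's implementation; the layer sizes are arbitrary:

```python
import numpy as np

# A single neural network layer reduces to a matrix multiply plus a
# nonlinearity - the operation GPUs parallelize across thousands of cores.
rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 256))  # hypothetical layer weights
inputs = rng.standard_normal((64, 512))    # a batch of 64 input vectors

# ReLU(inputs x weights): one matmul covers the whole batch at once.
activations = np.maximum(inputs @ weights, 0.0)
print(activations.shape)  # (64, 256)
```

Training repeats variations of this multiply billions of times, which is why a chip with thousands of simple arithmetic cores outruns a couple of dozen general-purpose ones.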

Small but mighty

Another reason for the growing popularity of high density systems is the fact that they simply pack more resources into a smaller footprint, making them an attractive option when looking to save space and consolidate sprawling server farms.

If you are a data center operator, high density systems allow you to maximize profits from your data halls, as long as the utility power supply and cooling capability are up to the task. The most expensive resource of a data center is its square footage, and ideally the savings should be passed back to the customer.

The desire to improve space utilization has also given us new breeds of hardware like blade servers, which only deliver on their potential when packed in tight clusters - something that simply can't be achieved in a traditional data center.


Since high density systems consume more power, they also require more cooling per square foot, and these two considerations will drive the design of a data center's space, making it suitable for some of the most demanding machines on the server floor.

Increasing available power to the rack means the components of the power chain need to scale up: this concerns uninterruptible power supplies and their batteries, as well as the PDUs. A larger operation would also require massive generators or a reliable secondary power feed.
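To give a sense of how the upstream equipment scales, here is a hedged sizing sketch. The 0.9 power factor and the 12-rack row are assumptions for illustration, not figures from the article:

```python
# Hypothetical UPS sizing sketch: apparent power (kVA) needed to carry
# an IT load, assuming a 0.9 power factor (an illustrative value).
def ups_kva_required(racks: int, kw_per_rack: float,
                     power_factor: float = 0.9) -> float:
    """Minimum UPS apparent power (kVA) for the given row of racks."""
    return racks * kw_per_rack / power_factor

# The same 12-rack row at a legacy load vs a high density load:
print(round(ups_kva_required(12, 4)))   # 53 kVA
print(round(ups_kva_required(12, 16)))  # 213 kVA
```

A fourfold jump in rack density means roughly a fourfold jump in UPS, battery and PDU ratings, before any redundancy is added on top.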

Traditional cooling methods are less effective when talking about high density computing. It's simply impossible to make existing equipment remove two to four times more heat than it normally handles, calling for either additional in-row cooling, clever containment strategies or refrigerated doors. Liquid cooling works wonders for high density hardware, but despite sound theory and a solid economic argument, it has yet to find mainstream adoption.
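The "two to four times more heat" problem can be quantified with the standard sensible heat equation; this is a back-of-envelope sketch, with the 10K supply-to-exhaust temperature rise chosen purely as an example:

```python
# Back-of-envelope airflow needed to remove a rack's heat, using the
# sensible heat relation: P = rho * cp * flow * delta_T.
RHO_AIR = 1.2    # kg/m^3, air density at roughly 20C
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air

def airflow_m3_per_s(power_watts: float, delta_t_kelvin: float) -> float:
    """Volumetric airflow required to carry away power_watts of heat
    with a delta_t_kelvin rise between supply and exhaust air."""
    return power_watts / (RHO_AIR * CP_AIR * delta_t_kelvin)

# A 4kW rack vs a 16kW rack, both at an assumed 10K temperature rise:
print(round(airflow_m3_per_s(4000, 10), 2))   # 0.33 m^3/s
print(round(airflow_m3_per_s(16000, 10), 2))  # 1.33 m^3/s
```

Four times the power means four times the airflow at the same temperature rise, which is exactly what legacy air handlers cannot deliver through the same floor tiles.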

Whereas traditional data centers are likely to adopt some form of cold aisle containment - feeding cool air directly to the cabinets - data centers designed for high density equipment sometimes choose to flood entire rooms with cool air and focus on extracting heat. Experience shows that this approach improves cooling efficiency under high load, but it requires a substantial investment in equipment and doesn't work well in older buildings. More cooling requires more power, which means cooling becomes more expensive.

Smaller data halls designed specifically for high density computing might offset this by raising the overall operating temperature - high density gear operates well at higher temperatures as it is designed to do so. In this case, more attention needs to be paid to the air flow - you don't want hot spots if the temperature is already up.

Some of the requirements for high density computing are less obvious, like the strength of the floor - will it be able to take the weight of a full rack? Some of the less obvious benefits of the high density approach include a reduced number of cables.

At the core of it, high density calls out to a primal emotion that's inside all engineers - the desire to run the biggest, fastest and most powerful machines around. And if that somehow makes commercial sense, all the better.

