Data centre trends that are defining the future

Written by Max Smolaks, News Editor at Datacenter Dynamics. Published 19 September 2017

The data centre never stops evolving. Every technological breakthrough, every shift in consumer behaviour leaves a mark. And as we become more dependent on remote computers - for work, socialising, shopping and entertainment - the facilities where those computers are located are becoming ever more critical. But does this rapid growth in demand mean that the planet will eventually be covered in data centres, with humanity forced to huddle in whatever space is left?

Not so: according to IDC, the number of what could be considered 'traditional' data centres has already peaked and begun to decline. Instead of building new facilities, the focus now is on growing existing locations and squeezing maximum performance out of every square inch of space. More performance requires more power, more power requires more cooling - and that has serious implications for data centre designers, operators and, of course, customers.

Higher density

Power density is mostly defined by how much computing you can squeeze into a standard rack unit, and here we are seeing some incredible advances: Intel's latest Xeon SP family, launched in July, enables system builders to place up to eight CPU sockets on a single motherboard, with up to 28 full-featured cores per socket.

It's a similar picture under the hood of AMD's latest Epyc chips, available since June. These are built around four individual dies with up to eight cores each, so you're essentially getting the performance of four CPUs in a single package. It's fair to say that the competition between two of the biggest names in chip-making is becoming more intense, perhaps for the first time since the glory days of Opteron, and both will push density forward.

Another driver of increased density is the commercialization of AI research. Deep learning was enabled by GPUs - chips originally intended to make videogames look more realistic - thanks to their ever-increasing parallel processing power. In some deep learning calculations, GPUs outperform CPUs by a massive margin, and there are already specialized accelerators on the market. What all of this shiny new gear has in common is small form factors and extremely high power consumption.
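
As a rough illustration of that margin, the sketch below times the same large matrix multiplication - the core operation in deep learning - first on a CPU and then on a GPU. It assumes PyTorch is installed and a CUDA-capable GPU is present; the matrix size and the resulting speed-up are illustrative, not a benchmark.

# Hedged illustration of the CPU-vs-GPU gap on the dense matrix maths that
# dominates deep learning. Assumes PyTorch and a CUDA-capable GPU; absolute
# numbers will vary widely with hardware.
import time
import torch

N = 4096
a = torch.randn(N, N)
b = torch.randn(N, N)

# Time the multiplication on the CPU
start = time.perf_counter()
c_cpu = a @ b
cpu_seconds = time.perf_counter() - start

# Time the same multiplication on the GPU (synchronise so timing is honest)
a_gpu, b_gpu = a.cuda(), b.cuda()
torch.cuda.synchronize()
start = time.perf_counter()
c_gpu = a_gpu @ b_gpu
torch.cuda.synchronize()
gpu_seconds = time.perf_counter() - start

print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s  "
      f"speed-up: ~{cpu_seconds / gpu_seconds:.0f}x")

On typical server hardware the GPU run tends to finish an order of magnitude or more faster, which is precisely why these accelerators are being packed so densely into racks.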

Off-the-shelf or DIY

In the past five years, we've seen the emergence of hyper-converged infrastructure (HCI): hardware that enables customers to build their desired systems from preconfigured blocks, thoroughly tested and shipped with the required software on board. HCI relies on virtualization and implements most critical operations in code, not dedicated silicon. Such systems are currently offered by most major vendors, including Dell EMC, HPE and Cisco. There are also a number of smaller, more agile companies that were born on the cusp of the hyper-converged revolution and have managed to make a dent in the market, like Nutanix or SimpliVity. Hyper-converged kit is much easier to manage than legacy IT, but the costs of this approach can be high.

On the other hand, more and more organizations are experimenting with 'white-box' servers and networking equipment, purchased directly from manufacturers (often located in Asia). Data centres built with unbranded or 'vanity-free' hardware, and either in-house or open source software, are much cheaper to populate and easier to upgrade or replace, but require lots of expertise to run properly - and don't come with the 'bulletproof' SLAs offered by traditional vendors.

These two approaches are likely to define the contents of data centres moving forward. Small to medium-sized organizations will probably go the HCI route, investing in hardware instead of staff. Cloud operators and especially large online businesses will have to invest in staff and skills, while keeping their hardware costs minimal and their designs as simple as possible.

New kind of virtualization

On the software side, Docker-like application containers could gradually replace virtual machines. Apps in containers are very easy to scale up or down, optimizing the use of compute, network and storage resources. In the data centre, this means greater insight into how resources are used, higher utilization and more accurate billing.
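
As a hedged sketch of what that scaling looks like in practice, the snippet below uses the Docker SDK for Python to launch a handful of identical containers and then tear them down again; the nginx image, the 'demo' label and the replica count are purely illustrative, and a locally running Docker daemon is assumed.

# Minimal sketch of container scale-out and scale-in using the Docker SDK
# for Python ('pip install docker'). Assumes a local Docker daemon; image
# name, label and replica count are illustrative.
import docker

client = docker.from_env()
REPLICAS = 3

# Scale up: start several identical containers from the same image
for i in range(REPLICAS):
    client.containers.run(
        "nginx:alpine",
        detach=True,
        name=f"web-{i}",
        labels={"demo": "scaling"},
    )

# Inspect what is running under our demo label
running = client.containers.list(filters={"label": "demo=scaling"})
print(f"{len(running)} containers running")

# Scale back down: stop and remove them just as quickly
for container in running:
    container.stop()
    container.remove()

Spinning replicas up and down in seconds, rather than provisioning whole virtual machines, is what makes that finer-grained utilization tracking and billing possible.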

The edge

In the future, data centres will have to be distributed more evenly and located closer to the end user, in order to beat one of the greatest challenges facing modern online services - latency.
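
A back-of-the-envelope calculation shows why distance is such a hard limit. The sketch below assumes signals travel through fibre at roughly two thirds the speed of light and ignores routing, queuing and processing delays entirely - so the numbers are best-case floors, not measurements.

# Best-case propagation delay over fibre: why proximity matters for latency.
# Assumes ~2/3 the speed of light in fibre and ignores routing, queuing and
# processing delays, so real-world figures will be higher.
SPEED_IN_FIBRE_KM_S = 200_000  # roughly 2/3 of c, in km per second

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time for a given one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1000

for km in (10, 100, 1000, 5000):
    print(f"{km:>5} km away: ~{round_trip_ms(km):.1f} ms round trip")

Even under these generous assumptions, a user 1,000 km from the nearest facility cannot see a round trip faster than about 10 ms - already a sizeable chunk of the latency budget for the applications discussed below.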

Applications that require an absolute minimum of delay include those related to 5G and driverless cars - and I think we can all agree that there has been remarkable progress in this field in the past few years.

Facilities in large population centres stand to gain massively from this trend - they will never struggle with occupancy rates again. But the industry will also need to build small, weatherproof, automated data centres outside of big cities, and build them quickly and at a low cost.

The weirdest data centre of this type has to be Microsoft's Project Natick. In 2015, a team at Microsoft Research built a small data centre in a watertight steel container, christened it Leona Philpot, and then sank it off the US Pacific coast for three months. The experiment proved successful - the vessel managed to serve commercial cloud workloads.

The engineers are already working on a bigger version. How's that for rapid evolution?
