The agent of change

Written by Max Smolaks, News Editor at Datacenter Dynamics. Published 2018-03-19.

What's happening next in data centres

The pace of change is accelerating. And it seems our ability to absorb new products and services is adjusting to keep up. Summoning Internet cars to do our bidding is already part of the daily routine. How about turning any property owner into an hotelier? Done. Digital money not backed by any country? Take your pick. Drone racing? There are several competing global events. And playing video games is now a profession.

3GPP, the industry organization responsible for wireless communications standards, had to accelerate its schedule and release an intermediate 5G specification, because both network operators and equipment vendors are champing at the bit: they have the equipment and processes all worked out; all they need is an official endorsement.

And every time a new technology comes along, it leads to changes in the data centre – the building or room that makes it all happen. If you’re interested in this subject, you’ve likely heard countless predictions before. So, in this post I’ve tried to list some of the less obvious trends that are going to have an impact on the corporate data centre – whether it’s located on company premises, in a colocation facility, or in a massive server farm as part of a public cloud service.

The zoo gets bigger

For decades, compute workloads relied on the holy trinity of CPU, memory and storage. In the past few years, the advent of machine learning and specialist tasks like image recognition has brought a new kind of silicon device into the data centre: the GPU – a massively parallel processor previously debased in the service of the video game industry.

Recently, yet another type of silicon has appeared in the data centre: the field-programmable gate array (FPGA). In a nutshell, FPGAs work alongside CPUs to make certain pre-defined tasks run a lot quicker – think everything from database acceleration to video encoding to network routing and switching.

In certain scenarios, FPGAs perform much better than GPUs, and for this reason they are coming to the data centre. This fact hasn’t been lost on Intel, which has been wheeling out new FPGA designs on an almost monthly basis.

AWS started offering access to cloud-based FPGAs from Xilinx almost a year ago, while Google went a step further and developed its own custom accelerator silicon – Tensor Processing Units, or TPUs.
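If you want to poke at this kind of hardware without buying any, cloud FPGAs are a rental away. Below is a minimal sketch of requesting one of AWS’s FPGA-backed F1 instances with boto3, the AWS SDK for Python – the AMI ID and key pair name are placeholders, not real values.

```python
# Minimal sketch: renting a cloud FPGA by launching an AWS F1 instance via boto3.
# ImageId and KeyName are placeholders -- substitute an FPGA developer AMI and your own key pair.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder FPGA developer AMI
    InstanceType="f1.2xlarge",         # F1 instances expose a Xilinx FPGA to the guest
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair name
)

print(response["Instances"][0]["InstanceId"])
```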

HPC for the masses

The world of high performance computing (HPC) used to revolve around scientific endeavor and government funding. Now, the lines between what’s considered HPC infrastructure and your average enterprise server rack have blurred: Tianhe-2, the world’s second fastest supercomputer, is built with the Xeon E5-2692 v2 – a chip anyone can buy online for around $2,000.

It turns out that many of the new enterprise workloads need performance, and not just scale – hence the rapid growth in the power density of IT and the sudden interest in all matters related to liquid cooling. The market has rejected the idea of micro-servers packing high numbers of relatively underpowered CPUs – we’re back to the ethos of big, hot, power-hungry chips that get the job done in the least amount of time.
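To put the density point in rough numbers, here’s a back-of-the-envelope calculation – the per-server wattage and rack count are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope rack power density, using assumed (illustrative) figures.
servers_per_rack = 40      # 1U dual-socket servers in a 42U rack (assumption)
watts_per_server = 500     # assumed draw for a dense dual-socket node under load

rack_kw = servers_per_rack * watts_per_server / 1000
print(f"Rack power: {rack_kw:.0f} kW")   # ~20 kW, far beyond the ~5 kW racks
                                         # many air-cooled rooms were designed for
```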

Death for some

The all-flash data centre is still a distant dream, but there’s already a class of storage devices that has been all but eliminated by advances in NAND design – the once-mighty 15,000 RPM hard drive. Intricate machines that spin disks of metal at ridiculous speeds are unnecessary when the same job can be done by electrons moving in a slab of silicon, both faster and at a lower cost.
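That speed gap is easy to put a number on: at 15,000 RPM, the average rotational latency alone – half a revolution – comes to two milliseconds before the head reads a single byte, whereas a flash read is an order of magnitude or two quicker. A quick bit of arithmetic:

```python
# Why spindle speed is a hard ceiling: on average the platter must turn half a
# revolution before the requested sector passes under the head.
rpm = 15_000
avg_rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, in milliseconds
print(f"{avg_rotational_latency_ms:.1f} ms")     # 2.0 ms -- and that's before seek
                                                 # and transfer time are added
```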

Traditional hard drives built for capacity will be with us for a while, but at this stage the benefits of flash memory are impossible to ignore.

Power to the people

One area of the data centre that’s been resistant to change is the power distribution chain. But there are signs that data centres, which tend to sit along major power lines, are ready to take a more active role in regulating the electricity grid. Two methods for using data centre battery systems to support the grid are gaining traction: peak shaving and frequency containment reserve. Both can serve as additional sources of revenue for data centre operators.
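As a rough illustration of what peak shaving actually involves, here’s a minimal sketch of the control rule in Python – the contracted peak and battery ratings are made-up figures, and a real system would also account for state of charge, tariffs and grid operator signals:

```python
# Minimal peak-shaving sketch: discharge the battery when site demand exceeds a
# contracted threshold, recharge gently when there is headroom.
# All figures below are illustrative assumptions.

PEAK_THRESHOLD_KW = 800    # contracted peak demand (assumption)
BATTERY_MAX_KW = 200       # maximum battery charge/discharge rate (assumption)

def battery_setpoint_kw(site_demand_kw: float) -> float:
    """Return battery power: positive = discharge to the load, negative = recharge."""
    if site_demand_kw > PEAK_THRESHOLD_KW:
        # Shave the excess, up to what the battery can deliver
        return min(site_demand_kw - PEAK_THRESHOLD_KW, BATTERY_MAX_KW)
    # Otherwise recharge at a fraction of the remaining headroom
    return -min(PEAK_THRESHOLD_KW - site_demand_kw, BATTERY_MAX_KW) * 0.25

for demand_kw in (650, 820, 950):
    print(demand_kw, round(battery_setpoint_kw(demand_kw), 1))
```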

At the same time, batteries themselves are changing: Li-Ion cells are now a common sight in the data centre, and a new contender, using Nickel-Zinc chemistry, has recently appeared on the market. And hey, supercapacitors are also very, very cool.

Meanwhile, some companies have been exploring the use of different types of fuel cells to completely liberate themselves from the grid.

Don’t call me legacy

The death of the mainframe has been predicted for a while now, as has the death of tape storage. There are fantastic alternatives. And yet, both manage to survive and thrive in the constantly changing landscape. For this reason, both tape and mainframes will likely exist forever. You can’t kill them. They are immortal.

Everybody hybrid

People with deep-seated prejudices against public infrastructure (including myself) need to let those go. Today, as long as you have the right skills, public cloud can be every bit as reliable and secure as any in-house data centre. It’s not the right destination for every workload, and it might not necessarily save you money, but you can’t beat a bit of public cloud for convenience and flexibility. Tapping into this flexibility while keeping the crown jewels within easy reach seems to be a recipe for success.
