Insufficient Data

Written by Phil Alsop, Editor, DCS Europe. Published Tuesday, 26 January 2016, 08:00

With a wife working at the Royal United Hospital in Bath, two sons celebrating at their respective girlfriends', and a third son lying on the bathroom floor feeling rather sorry for himself, Christmas Day was a quiet one in my household, and allowed me to indulge in my favourite pastime – reading. (Lest anyone feel too sorry for me at this stage, rest assured we had a great family celebration on Christmas Eve, and the weekend after Christmas Day.) I'd been saving one of Thomas Hardy's lesser novels for the holiday season and, apart from being surprised at how enjoyable it was (and cheerful, not always guaranteed with Mr H!), I was also brought up short by the appearance of the phrase 'insufficient data', not once but twice, within the first hundred or so pages. Of course, it was not being used in a digital, IT sense – although the book did feature the arrival of both the railway and telegraph networks – but referred to characters not having enough information on which to base a particular decision.

I couldn’t help but make the link from Victorian England to the data centre world which keeps me busy when not on holiday. After all, ‘insufficient data’ is the cause of so much inefficiency and extra cost within the facilities that underpin the applications and IT infrastructure that seem to be all-pervasive in the 21<sup>st</sup> century. There’s no doubting that much good progress has been made in terms of the intelligence available to those who run data centres – thanks to the advent of DCIM and various other measuring, monitoring and analysis tools; nor that both facilities and IT infrastructures are becoming ever more efficient, thanks to technology developments that contribute to better design, operation and maintenance of all manner of hardware.

However, with the advent of the Internet of Things, such intelligence and efficiencies are about to get a whole lot better. Intelligence will be everywhere. Ultimately, there's no reason why every single piece of hardware – storage, network, servers, power and cooling, lighting and much more – should not have intelligent chips built in. These will gather crucial operating data and pass it on to a central management tool, which will respond to the information reported to it in near real time and, equally importantly, build up a database of operational and maintenance information that can be interrogated and used to improve the performance of the facilities and IT infrastructure over time. So, there will be immediate benefits – this server is about to crash, the Flash storage is underperforming, here's what to do – and longer-term ones – data centre cooling capacity can be reduced by x percent based on historical data, physical server capacity can be reclaimed thanks to virtualisation consolidation.
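To make the two timescales concrete, here is a purely illustrative sketch of a central management tool handling both: an immediate reaction to a reading, and a growing history for longer-term analysis. The class name, device names and temperature threshold are invented for illustration, not taken from any particular DCIM product.

```python
from statistics import mean

# Hypothetical alert threshold; real values depend on the hardware.
TEMP_ALERT_C = 80.0

class ManagementTool:
    """Toy central manager: reacts to readings in (near) real time
    and keeps a history for longer-term capacity analysis."""

    def __init__(self):
        self.history = {}  # device id -> list of temperature readings

    def report(self, device, temp_c):
        self.history.setdefault(device, []).append(temp_c)
        # Immediate benefit: flag a device that is about to overheat.
        if temp_c >= TEMP_ALERT_C:
            return f"ALERT: {device} at {temp_c}C, take action now"
        return "ok"

    def average_temp(self, device):
        # Longer-term benefit: historical averages can justify, say,
        # trimming cooling capacity if they stay comfortably low.
        return mean(self.history[device])

tool = ManagementTool()
tool.report("server-42", 55.0)   # returns "ok"
tool.report("server-42", 85.0)   # returns an ALERT string
tool.average_temp("server-42")   # → 70.0
```

The same stream of readings serves both purposes: the alert path responds now, while the accumulated history supports the "reduce cooling by x percent" style of decision later.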

Much of this measuring, monitoring and analysis is already possible, but the Internet of Things brings with it a new level of granularity, automation and optimisation, thanks to heightened intelligence. The decisions as to which applications should run on which VMs, on which physical servers, and what data should reside on which virtual and physical storage media, and where, can be made taking into account a range of factors, including latency, security and cost. In other words, a complicated Big Data gathering operation can generate the raw information required to carry out a complex sum that will provide the optimum data centre environment.
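One common way to frame that "complex sum" is as a weighted score across the competing factors. The sketch below is a deliberately simplified illustration: the weights, host names and candidate figures are all invented, and a real placement engine would weigh many more factors.

```python
# Toy placement decision: score candidate hosts on several factors.
# Negative weights penalise latency and cost; positive rewards security.
# All weights and candidate figures are hypothetical.
WEIGHTS = {"latency_ms": -1.0, "security_score": 5.0, "cost_per_hour": -20.0}

candidates = [
    {"name": "host-a", "latency_ms": 2.0, "security_score": 3, "cost_per_hour": 0.40},
    {"name": "host-b", "latency_ms": 8.0, "security_score": 4, "cost_per_hour": 0.25},
]

def score(host):
    """Weighted sum of the factors: higher is better."""
    return sum(WEIGHTS[k] * host[k] for k in WEIGHTS)

best = max(candidates, key=score)
# Here host-b wins: its cheaper, more secure profile outweighs
# its higher latency under these particular weights.
```

Change the weights (say, for a latency-sensitive application) and the answer flips, which is exactly the point: the raw data stays the same, the optimisation criteria drive the decision.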

Data centre customers should be able to demand and receive an accurate and detailed view of their applications, to give them reassurance that the bill they are paying is as low as it can be – no wasted or inefficient power usage, facilities infrastructure or IT underpinning these applications that are the lifeblood of the business. The end result will be the optimum application stack.

Some facilities and IT vendors are already promoting this approach, and offering IoT tools to help, while some Cloud and managed services providers, alongside colocation providers, are using the new levels of operating intelligence available not only to optimise their own data centre environments, but also to help their customers (actual and potential) understand exactly what they are paying for. It's no longer enough to agree a 'simple' contract that includes fixed power, infrastructure or compute costs. There should be an understanding that, within some kind of predetermined flexibility framework, customers should only pay for what they use and not what they think they might use.
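To illustrate what "pay for what you use, within a predetermined flexibility framework" might mean in practice, here is one hypothetical shape such a contract could take: a committed minimum (the floor), a standard rate within the agreed band, and a premium rate beyond the ceiling. All the figures and the structure itself are invented for illustration; real contracts vary widely.

```python
def monthly_bill(used_kwh, floor_kwh, ceiling_kwh, rate, overage_rate):
    """Metered billing within a flexibility band (illustrative only):
    - usage below the floor still pays the committed minimum;
    - usage within the band is billed at the standard rate;
    - usage above the ceiling pays a premium overage rate."""
    billable = max(used_kwh, floor_kwh)
    if billable <= ceiling_kwh:
        return billable * rate
    return ceiling_kwh * rate + (billable - ceiling_kwh) * overage_rate

monthly_bill(800, 1000, 5000, 0.10, 0.15)   # below floor: pays the minimum, 100.0
monthly_bill(3000, 1000, 5000, 0.10, 0.15)  # within band: tracks usage, 300.0
```

The customer's bill tracks metered usage rather than a guess made at contract time, which is only possible once the operating intelligence described above exists to meter it.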

So, from 'insufficient data' to more than enough data. And, of course, all this extra new data will require significant extra data centre space to house the extra storage, compute and network resources that will underpin this Big Data application. And then there are all the other industries that will be leveraging IoT and Big Data to similar effect. The data centre industry is about to undergo a period of massive change and massive growth.

The title of the Hardy novel I read over Christmas? 'A Laodicean', a word meaning 'half-hearted or indifferent'. Safe to say that, whether in 19<sup>th</sup> century Wessex, or the 2016 IoT world, there's no place for such apathy.