High Performance Computing in a data centre

Once upon a time, High Performance Computing (HPC) was the preserve of only the very wealthiest of businesses and, somewhat bizarrely, the very ‘poorest’ of academic and research facilities. The investment required to set up the necessary computing infrastructure – servers, storage, networks and the like – to process vast data sets (the original Big Data) was beyond the reach of the vast majority of companies. Put simply, the return on investment (ROI) from HPC for most businesses just wasn’t there. Yes, plenty of organisations would have liked to run large database queries, but the time and money required to make these happen were matched by nothing more than the vaguest promise of some kind of return to the business – not something to send the finance department rushing to invest.

Fast forward to the present, and the great news is that, thanks to the large-scale IT infrastructure investments made by colocation and cloud providers, HPC is now not just affordable for all, but little short of a no-brainer. No longer is large-scale number crunching restricted to the oil and gas companies searching for new resources; or the finance industry looking to develop ever more sophisticated algorithms that will provide a crucial, real-time trading edge; or the research community working towards the next best-selling drug or cancer cure. No, HPC has become ‘democratised’.

Where, previously, building your own data centre (BYODC!) to perform HPC made no financial sense for most businesses, accessing an HPC infrastructure courtesy of a colocation or cloud provider for a few hours or days, to run a Big Data project, makes perfect sense. The colocation/cloud provider has made the major investment in the required IT infrastructure, and then rents it out to many different businesses to run their various data queries.
The end users pay a fraction of the cost of the IT infrastructure, bringing an HPC ROI within realistic reach; the HPC infrastructure provider also achieves its ROI, as many organisations hire the same resource. A win/win scenario.

So, provisioning your own, potentially infrequently used, HPC environment makes little financial sense – all the more so when one considers that the infrastructure will need to be maintained and updated over time.

And then there’s the space requirement. Although the move to high-density data centres means that, in theory at least, some space can be ‘reclaimed’, the reality is that an in-house data centre has a finite amount of space that is under severe pressure. Adding HPC infrastructure to this environment may not be possible or practical, whether or not it’s affordable – especially as the servers and equipment used in HPC often require specialist racks to accommodate them, not to mention the fact that, once the jump to a high-density HPC solution is made, there’s a need to invest in extra cooling infrastructure. In an in-house data centre, installing this extra cooling often means leaving an exclusion zone around the HPC rack. With a colocation data centre provider, there’s no need for this costly supplementary cooling, as the colocation environment is already built to cater for HPC.

And what about the ability to scale? Clearly, the amount of IT resource required by any particular HPC project will vary. So, do you build the largest possible HPC infrastructure for that ‘just in case’ scenario – and have most of it sit idle for most of the time? Or do you build for a ‘sensible’ HPC expectation, and then face a major upheaval when you realise that you didn’t build the infrastructure big enough? Scaling via a colocation or service provider is as easy as requesting (and paying for) some extra compute resource for the (short or long) time required.
Of course, all of the above in-house discussion assumes that there are one or more individuals in your organisation with the necessary expertise to run such an HPC environment. Not impossible, but there’s a fair chance that your colocation or service provider has access to rather more HPC expertise and professional resource than you do!

We may be in post-Brexit times, but the issue of compliance isn’t going anywhere soon. Indeed, one could argue that it has just become rather more complicated. While it’s unlikely that more than a handful of individuals on the planet could pass a rigorous exam on the topic of compliance right now, it’s highly likely that most of these knowledgeable folk are working for specialist colocation and service providers, rather than end-user enterprises. With so many nuances surrounding the use and storage of data (who, where, what and how), the comfort of being able to trust a colocation or service provider to get it right is potentially rather more attractive than the stress of worrying whether your in-house HPC environment ticks all the compliance boxes.

In summary, colocation and service providers have brought the infinite possibilities of HPC to all – and with the Internet of Things threatening/promising to make today’s petabytes of bits and bytes seem like an insignificant drop in the data ocean, few, if any, companies can afford to ignore this opportunity. After all, if you don’t gain a deep understanding of what your customers want from you now and in the future, then somebody else will.