Whisper it quietly, but data centres do not exist in isolation. They are built to house IT equipment which, in this digital age, runs just about every aspect of our lives. Yes, the servers, storage and networks are what really matter. A data centre is nothing but glorified packaging.

Okay, before I upset the whole of the data centre industry(!), let’s step back a little and agree that the components that make up this packaging, and the packaging itself, are very important, and that without them the IT infrastructure would almost certainly not function or, at the very best, function very poorly indeed. In other words, there’s an interdependence between the two.

Historically, this interdependence has been only grudgingly recognised. Data centre professionals and IT professionals have long been aware of each other, but have kept their interactions to a bare minimum. The data centre manager watches as their building is constructed, kitted out with power, cooling, cabinets, cabling, fire suppression equipment and the like, and then comes that ‘fateful’ day when the IT folk arrive and make a mess with their boxes, and nothing is ever quite so perfect again in the building. From the IT perspective, the data centre staff just ‘get in the way’. Everyone knows that the IT infrastructure is what drives the applications, so why is the data centre a permanent bottleneck when it comes to power demand, cooling capacity, floor loading and the like?

Happily, this adversarial relationship is beginning to change. There’s a growing recognition that an IT-related issue anywhere in the business is a problem for the whole business, not just for the department where the problem lies. Yes, point the finger at the network, or the UPS, or the cabling if it makes you feel good, but if your customers are not receiving the level of service they expect (and receive from your competitors), it’s a fairly pointless exercise.
Far better to understand and embrace the idea that, since everything data centre and IT related is interconnected, we should all work together to, ideally, avoid problems altogether and, realistically, address those issues that are bound to arise.

Take a very simple example. A storage engineer goes to the data centre and replaces some hard disk drives with solid state disk (SSD), or flash, storage. The application which sits on this new storage doesn’t perform as expected. Further investigation reveals that the existing IT network isn’t able to deliver the speeds expected of it (and is dragging down the speed of other applications too), and that the cabinet in the data centre which houses the SSDs keeps overheating.

There’s a simple solution, and one which more and more organisations are adopting. Why not have the storage engineer explain the planned change to both the data centre manager and the networking engineer, so that, by the time the SSDs are installed, the cabinet won’t overheat and the network has been upgraded to offer the speed the application needs?

Such convergence is relatively simple to organise: whoever owns a project engages with all the relevant IT and data centre staff to ensure that the final application performance is optimised. Humans working with humans.

But does this approach still work as data centre and IT environments become much more varied and complex in order to meet the demands of digital consumers? Typically, a digital business will have a mixture of data centre and/or IT infrastructure assets located on-premises, in colocation facilities, and in one or more clouds. Even in our earlier example, where humans are largely still dealing with humans and with relatively straightforward data centre and IT infrastructures, monitoring and management software tools are already making an important contribution to the smooth running of all the components that contribute to application performance and reliability.
How much more, then, are such software tools needed to make sense of digital data centres, colocation facilities and clouds spread right across the world, all housing and/or providing a whole range of IT infrastructure, and all of it required to run a business which might have anything from a single office to multiple global locations?

Enter AIOps – artificial intelligence for IT operations – which promises to end the holy-grail-like quest for that single pane of glass for infrastructure management. Right now, AIOps may be a new term to many or, if not a new term, then almost certainly a new concept. At the basic level, AIOps consolidates the three separate disciplines of network, application and infrastructure monitoring and management to provide an integrated, single view of a company’s overall data centre and IT infrastructure – whatever combination of on-premises, colocation and (multi)cloud this might be.

AIOps offers at least two major benefits. The first is the ability to understand your existing infrastructure and how it affects the applications that drive your business. Most importantly, this comprehensive knowledge base allows you to make intelligent decisions as to where and when to run which applications on which environment(s). For example, we’re all being told to rush headlong into the cloud, but your AIOps tool might just point out that a more stable and/or cost-effective home for a particular application or applications might well be on-premises and/or colocation. So AIOps provides you with planning and, subsequently, operational certainty: you know that your applications will always be running in the right location(s), on the right infrastructure.
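To make that ‘single pane of glass’ idea a little more concrete, here’s a minimal sketch – all class names, field names and figures are invented for illustration, not any vendor’s API – of how readings from the three traditionally separate monitoring silos might be consolidated into one view keyed by application:

```python
from dataclasses import dataclass

# Illustrative metric readings from the three traditionally separate
# monitoring silos: network, application and infrastructure.
@dataclass
class Metric:
    app: str        # application the reading relates to
    source: str     # "network", "application" or "infrastructure"
    name: str       # e.g. "latency_ms", "cabinet_temp_c"
    value: float

def single_pane(readings):
    """Consolidate per-silo readings into one view keyed by application."""
    view = {}
    for m in readings:
        view.setdefault(m.app, {})[f"{m.source}.{m.name}"] = m.value
    return view

readings = [
    Metric("billing", "network", "latency_ms", 42.0),
    Metric("billing", "application", "error_rate", 0.01),
    Metric("billing", "infrastructure", "cabinet_temp_c", 31.5),
]
print(single_pane(readings))
# {'billing': {'network.latency_ms': 42.0,
#              'application.error_rate': 0.01,
#              'infrastructure.cabinet_temp_c': 31.5}}
```

The point of the sketch is simply that once network, application and infrastructure readings sit in one structure, a question like ‘why is billing slow?’ can be answered against all three silos at once – an overheating cabinet shows up right next to an application error rate.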
As if this were not a powerful enough reason on its own to investigate AIOps further, once the intelligent planning and provisioning optimisation exercise is over, this new discipline then provides ongoing intelligent infrastructure monitoring and management – a crucial benefit for the digital business, where agility, flexibility, scalability and speed are vital attributes when it comes to customer interaction. In other words, you have a dynamic business environment which relies on a dynamic data centre and IT infrastructure to deliver, and how can you be sure that this infrastructure is optimised unless it is constantly monitored, managed and, where necessary, upgraded and changed?

AIOps learns about your infrastructure and recommends when and where to move hardware and applications, with the decision-making based on the policies that you’ve written – policies which can grade the importance of applications in both monetary and operational terms.

Without labouring the point, it’s safe to say that those companies which have embarked on the early stages of AIOps adoption have, without exception, discovered much about their existing data centre and IT infrastructure of which they were unaware and, almost universally, saved significant sums of money as a result. And assumptions about moving assets and applications to the cloud (it’s far cheaper, much more reliable, etc.) have also been challenged by those deploying AIOps tools.

I could write much, much more about AIOps, but hopefully what you’ve read above has, at the very least, made you ever so slightly curious – whether that’s curious to find out more about the subject, or just curious as to whether I’ve been out in the sun a little too long (not easy in the UK’s November deluges)!
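As a postscript, the policy-driven placement idea mentioned above can be sketched in a few lines. This is a toy illustration, not any real AIOps product’s logic, and every policy, environment name and figure below is invented: each application’s policy grades what it needs and what it’s worth, and the cheapest environment that satisfies the policy wins.

```python
# Hypothetical application policies: operational requirement (latency)
# plus a monetary grading of the application's importance.
POLICIES = {
    "billing": {"max_latency_ms": 50, "monthly_value_gbp": 100_000},
    "archive": {"max_latency_ms": 500, "monthly_value_gbp": 1_000},
}

# Hypothetical candidate environments with invented characteristics.
ENVIRONMENTS = [
    {"name": "on-premise", "latency_ms": 10, "monthly_cost_gbp": 8_000},
    {"name": "colocation", "latency_ms": 30, "monthly_cost_gbp": 5_000},
    {"name": "cloud",      "latency_ms": 120, "monthly_cost_gbp": 2_000},
]

def recommend(app):
    """Return the cheapest environment that satisfies the app's policy."""
    policy = POLICIES[app]
    candidates = [e for e in ENVIRONMENTS
                  if e["latency_ms"] <= policy["max_latency_ms"]]
    return min(candidates, key=lambda e: e["monthly_cost_gbp"])["name"]

print(recommend("billing"))  # colocation – cloud is cheaper but too slow
print(recommend("archive"))  # cloud – it meets the relaxed latency policy
```

Even this toy version shows the cloud-scepticism point from earlier: the cheapest environment on paper isn’t recommended for the billing application, because the policy you wrote says latency matters more than cost.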