
Artificial Intelligence in the Data Centre?


This month I have been asked to write about the opportunities for optimisation provided by applying artificial intelligence (AI) in the data centre, and was immediately struck by the thought that applying human intelligence would be a good place to start! Is it ‘intelligent’ to refrigerate hardware that can run continuously at 32°C inlet, to design for full load when most facilities never rise above 50% in their first five years, or to fit bullet-proof glass, biometrics and vehicle-traps into a facility that mainly houses a price-comparison website for online dating services? Anyway, let’s look at AI first.

Intelligence is an attribute that enables the possessor to assimilate and process information to its own benefit by creating and applying adaptations to its behaviour – and even to its very existence, as Darwin postulated. Homo sapiens owe their development from the cave to today to applying their capacity for intelligence to their place in the world around them, albeit, of course, together with an innate capacity for stupidity in spades. Here we must not confuse the intelligence of Einstein in mentally visualising the mechanics of the universe with instincts like the craftiness of the fox or the ability of salmon to return to the precise spot of their spawning every year. In fact, the very best AI we can boast of today hardly even approaches those instincts. What we call AI is the collection of data, storage and processing with algorithms written by humans or compiled by machines that are programmed by humans. So far there is nothing ‘artificial’ about it except for the speed and accuracy – it never gets tired, bored, distracted or forgetful, so it appears to be clever. Our ICT progress is exponential in terms of processing speed and memory capacity, but we are far behind nature and set on a path that will never catch up. For a good read on how far we must go, find the book written in 2005 by Raymond Kurzweil entitled ‘The Singularity is Near’ – fascinating, albeit rather optimistic, but it is clear that the world’s fastest supercomputers are currently only capable of simulating the neural network of an insect, so ‘true’ AI is far from reality in the data centre.

So, given that AI in the data centre M&E infrastructure is embryonic in its hype-cycle, how far have we got with basic control and, ultimately, automation? As Lord Kelvin (1824-1907) said, ‘you can’t manage what you don’t measure’, so monitoring is the prerequisite to all forms of control, and we have had controls in the form of Building Management Systems (BMS) in the data centre since the very beginnings of the industry in the 1950s. BMS have always tended not to ‘manage’ anything (which is a good thing) but, rather, gather status and alarms and display them centrally so that ‘someone’ (if watching) can react to an undesirable event. They have developed from physical annunciators – engraved plastic mimic panels with embedded status lamps and an alarm buzzer – through to multi-screen SCADA displays that the operator can use to drill down into the sub-systems such as UPS or cooling. One almost unnecessary step was (and still is) when the controllers of the sub-systems (e.g. UPS) are duplicated within the BMS instead of simply being on a separate screen. Logically, a dedicated system display rather than a screen-within-a-screen has always seemed preferable to me, but BMS have become ever more feature-rich, complex and expensive. Rather like having a microprocessor controlling the time in a toaster? Anyway, the BMS is alive and well in the data centre and is, at best, an alarm-manager and, at worst, can turn things on and off. The perfect BMS is one that reports everything but controls nothing or, if it has a control function, fails in a benign way.
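To make the ‘reports everything, controls nothing’ idea concrete, here is a minimal sketch of that kind of alarm-manager logic. All the names and thresholds (SensorReading, ALARM_LIMITS, the point labels) are my own illustrative assumptions, not any real BMS product or protocol:

```python
# A minimal sketch of the 'perfect BMS' described above: it checks status
# points against limits and reports alarms, but deliberately actuates nothing.
# All names and thresholds here are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SensorReading:
    point: str      # e.g. "UPS-1 output load %"
    value: float

# Hypothetical alarm thresholds for a few monitored points.
ALARM_LIMITS = {
    "UPS-1 output load %": 90.0,
    "CRAC-3 supply air C": 27.0,
}

def check_alarms(readings: list[SensorReading]) -> list[str]:
    """Report-only: returns alarm messages, never controls anything."""
    alarms = []
    for r in readings:
        limit = ALARM_LIMITS.get(r.point)
        if limit is not None and r.value > limit:
            alarms.append(f"{datetime.now():%H:%M:%S} ALARM "
                          f"{r.point} = {r.value} (limit {limit})")
    return alarms

if __name__ == "__main__":
    sample = [SensorReading("UPS-1 output load %", 93.5),
              SensorReading("CRAC-3 supply air C", 22.1)]
    for alarm in check_alarms(sample):
        print(alarm)  # displayed centrally so 'someone' (if watching) can react
```

Note that the benign-failure property falls out naturally: if this code crashes, nothing in the plant changes state.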

In parallel to the BMS, the other silo in the data centre (the ICT department) runs spreadsheets or software packages for asset management – logging where hardware is installed, with some monitoring or recording of the power consumed. Some of these software packages are quite sophisticated for load-planning but always need human intelligence to be applied to ensure that the ‘question’ is within the physical limits of the infrastructure. A prime example is where the software will allow the user to install 15kW in a cabinet without ‘warning’, but the cold aisles are only 1200mm wide and the raised floor can’t get enough cooling air to the front of the cabinet. As a point for further discussion, the most sophisticated load-planning tools use mini-CFD applications but these, despite looking mighty powerful and predicting hot-spots etc., are never based on the reality of each server varying its fan speed with IT load.
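The missing sanity check is simple physics: the heat a cabinet can reject is limited by the mass of cooling air actually reaching its front. Here is a hedged sketch of the warning the text says these tools omit – the airflow figure and temperature rise are illustrative assumptions, not vendor data:

```python
# Sanity check a load-planning tool could apply before accepting N kW in a
# cabinet: Q[kW] = airflow * air density * specific heat * temperature rise.
# All input figures below are illustrative assumptions.
AIR_DENSITY = 1.2          # kg/m^3 at typical room conditions
AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg*K)

def max_cabinet_kw(grille_airflow_m3s: float, delta_t_k: float) -> float:
    """Heat rejectable given the cooling air actually delivered to the cabinet."""
    return grille_airflow_m3s * AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k

def check_placement(requested_kw: float, grille_airflow_m3s: float,
                    delta_t_k: float = 12.0) -> None:
    limit = max_cabinet_kw(grille_airflow_m3s, delta_t_k)
    if requested_kw > limit:
        print(f"WARNING: {requested_kw:.1f} kW requested but airflow "
              f"supports only {limit:.1f} kW")
    else:
        print(f"OK: {requested_kw:.1f} kW within {limit:.1f} kW airflow limit")

# The 15 kW example from the text: a narrow cold aisle delivering (say)
# 0.5 m^3/s through the grille supports barely half of it.
check_placement(15.0, grille_airflow_m3s=0.5)
```

At a 12K air temperature rise, 0.5 m³/s supports only about 7 kW – which is exactly the kind of warning the user never gets.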

The next development was to add an Energy Management System (EMS), although often this was/is an add-on module to the BMS. Fitting the right sensors in the right places allows the EMS to record and report on every aspect of the power flow in the data centre, with constant updating of the annualised PUE etc. The key point for our purposes here is that the ‘M’ in EMS, just like the ‘M’ in BMS, doesn’t manage anything – EMS simply presents the data for you to interpret and act on. You are the ‘manager’.
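For readers unfamiliar with the metric, the annualised PUE an EMS reports is just total facility energy divided by IT equipment energy over the year. A minimal sketch, with invented sample figures:

```python
# Annualised PUE as an EMS would report it: total facility energy / IT energy.
# The monthly figures below are invented purely for illustration.
def annualised_pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy, over the period."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_kwh

# Hypothetical metered energy per month: (facility total kWh, IT-only kWh).
monthly = [(310_000, 200_000), (300_000, 198_000), (295_000, 197_000)]
facility = sum(f for f, _ in monthly)
it = sum(i for _, i in monthly)
print(f"Annualised PUE to date: {annualised_pue(facility, it):.2f}")
```

The EMS computes and displays the number; deciding what to do about a PUE of 1.52 remains entirely your job.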

And so we arrive at the latest incarnation of ‘smart’ in data centre management: Data Centre Infrastructure Management (DCIM, pronounced dee-sim). Many industries have this desire for ‘one-window-on-their-world’ and DCIM claims to provide that window for the data centre. Pulling together a picture of power, cooling, fire, cabling and ICT hardware position, asset management and utilisation, DCIM regurgitates data in ever-better load-management suggestions and pretty pictures. DCIM has only started on its journey and does not yet fulfil its promise. It doesn’t ‘manage’ anything, just like BMS and EMS, and, importantly, does not usually replace BMS. If it does appear to replace BMS then it is usually because its OEM used to make BMS and added an asset management and load-planning module to their product. From the other direction, load-planning software OEMs have added modules for power and cooling to create a DCIM product.

Now don’t get me wrong, DCIM produces lots of good data in very attractive formats that enable the user to make well-informed decisions – if they have the staff and time to look at it. The downside is that DCIM doesn’t manage anything and therefore cannot save you any energy or optimise anything. It has no artificial intelligence and needs human intelligence to turn the data presented into actions. And therein lies the rub: DCIM has no definable Return-on-Investment (RoI). It is a (very) nice-to-have that is expensive but does nothing that you can’t do with a BMS and some spreadsheets, given a little time, understanding and applied intelligence.

So what is the future of controls like DCIM in the data centre? Well, in my opinion, only when the DCIM monitors the incoming IT work-load, is told (by the hypervisor?) where it is going, and then controls the power and cooling systems to minimise energy consumption will it fulfil the ‘M’ in DCIM. But which will take control, the DCIM or the hypervisor – or are they both together just the ultimate manifestation of a future DCIM? And will anyone trust automatic centralised control? What about redundancy, security, hackers? There is also the problem that the fastest growing data centre sector is colocation where, more often than not, the data centre operator has no control over the ICT stack, so DCIM is almost impossible to justify.
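As a purely speculative sketch of that closed loop – the hypervisor reporting where the load is going and the DCIM trimming cooling to match – here is the general shape. Every name and figure (the zones, capacities, the 30% safety floor) is an assumption for illustration; no real DCIM or hypervisor API is implied:

```python
# Speculative closed-loop sketch: cooling output tracks each zone's actual
# IT load, as reported by the hypervisor, instead of running flat out.
# All zone names, capacities and the safety floor are hypothetical.
def cooling_setpoint_pct(zone_it_kw: float, zone_capacity_kw: float,
                         floor_pct: float = 30.0) -> float:
    """Cooling demand as % of zone capacity, clamped to a safe floor and 100%."""
    demand = 100.0 * zone_it_kw / zone_capacity_kw
    return min(100.0, max(floor_pct, demand))

# Hypothetical hypervisor report of load placement: zone -> current IT kW.
placement = {"zone-A": 42.0, "zone-B": 8.0}
capacity = {"zone-A": 60.0, "zone-B": 60.0}

for zone, kw in placement.items():
    setpoint = cooling_setpoint_pct(kw, capacity[zone])
    print(f"{zone}: set cooling to {setpoint:.0f}%")
```

Even this toy loop raises the questions above: who owns the setpoint when the DCIM and the hypervisor disagree, and what happens when the link between them fails?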

Back to the question: Can AI be applied in the data centre? Algorithms written by humans, yes, and probably soon, but they may not be adoptable in our paranoia-ridden industry. Real AI is probably more than 50 years away… God help us.
