How AI is advancing data centre capabilities

Data centres are at the heart of almost all business operations globally, with the majority of companies across the world indicating that they have at least one or two data centres in operation.

Currently, the challenge that faces the vast majority of these businesses is that they have a wealth of data but lack the means to utilise it to make actionable decisions.

AI platforms have the power to benefit businesses by efficiently digesting all types of data, including video and sensor data, and offering insights. This can become a critical differentiator for any business that has accumulated a great deal of data to ingest into such systems.

With this in mind, the first key step for businesses looking to harness the power of AI is to identify which function of AI suits their needs. AI can be employed in a significant number of ways, but for the sake of brevity this article classifies AI workloads into two broad types: machine learning (ML) and deep learning (DL). Understanding both of these AI processes, and the types of data they interact with, is essential when navigating your options to find the right fit for your business.

Knowing the available AI functions

Let’s start with ML. Given its ability to process larger workloads, ML fares well with substantial data pools – especially at the onset of training the decision tree. The more data that can be used for the ML baseline, the more accurate the resulting models will become. The concern with these workloads, however, is that they require a storage platform able to scale alongside them: it must meet the demands of the base data while also growing as more data is collected and sampled.
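To make the “more data, more accuracy” point concrete, here is a minimal, illustrative sketch (not any vendor’s platform) of the simplest possible decision tree – a single-split “stump” – trained on synthetic sensor readings. The threshold names and the 5.0 cut-off are invented for the example; the point is that the split learned from a larger baseline sits much closer to the true rule.

```python
import random

def train_stump(samples):
    """Fit a one-split decision tree (a 'stump'): try every observed
    reading as a candidate threshold and keep the most accurate one."""
    best_thresh, best_acc = 0.0, 0.0
    for x, _ in samples:
        # Accuracy of the rule "alert when reading >= x" over all samples.
        acc = sum((xi >= x) == yi for xi, yi in samples) / len(samples)
        if acc > best_acc:
            best_thresh, best_acc = x, acc
    return best_thresh

def make_samples(n):
    # Synthetic sensor readings in [0, 10); the hidden true rule
    # labels a reading as an alert when it is 5.0 or above.
    return [(x, x >= 5.0) for x in (random.uniform(0, 10) for _ in range(n))]

random.seed(0)
t_small = train_stump(make_samples(10))     # small training baseline
t_large = train_stump(make_samples(2_000))  # larger training baseline
print(f"threshold learned from 10 samples:    {t_small:.3f}")
print(f"threshold learned from 2,000 samples: {t_large:.3f}")
```

With only ten readings the learned threshold can land well away from 5.0; with two thousand it sits almost exactly on the true boundary – the storage implication being that the training set, and the platform holding it, only ever grows.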

Next, we have DL. Designed to mimic the human thought process, DL focuses on representing the world through a hierarchy of concepts. Unlike ML, however, DL uses neural networks rather than traditional decision trees. These networks rely on graphics processing unit (GPU) powered systems for the computing power needed to train the models, and they prove most effective when fed as much data as possible – requiring a system that can scale continually and indefinitely. Much like ML, these platforms are also far more dependent on high input/output (I/O) performance than on high throughput.
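As an illustrative sketch only (real DL frameworks distribute this across GPUs), the basic unit a neural network scales up is a single artificial neuron trained by gradient descent. The toy below learns the logical AND function; every name and parameter here is invented for the example.

```python
import math
import random

def sigmoid(z):
    """Squash a pre-activation into a (0, 1) probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(1)
w1, w2, b = random.random(), random.random(), random.random()

lr = 1.0  # learning rate
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        g = p - y  # gradient of the cross-entropy loss at the pre-activation
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # learned AND: [0, 0, 0, 1]
```

A production DL model repeats exactly this update across millions of neurons and billions of samples, which is why GPU compute and continually scaling storage go hand in hand.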

Methodology and integration

These workloads clearly illustrate how AI has advanced data centre capabilities beyond a simple data store. With these points in mind, it is crucial to understand the end goal you want to achieve with your data’s application and storage, so that it can be processed with the corresponding workload methodology.

Once this methodology has been identified, operators need to determine which storage solution to implement in order to conserve resources. Similarly, system failures must be anticipated and prevented so that these critical communications are not impeded. The power of AI can help here too: a deliciously recursive benefit of AI is that it can be of great benefit both to the analysis of data and to the monitoring of the infrastructure the data resides upon.

Of course, once the most appropriate AI function has been selected, businesses must then consider the integration process. Many organisations may face difficulties when constructing AI data storage, with a multitude of factors having to be considered for storage networking and tuning storage to work in harmony with AI functions.

As such, the lure of pre-packaged storage products that deliver an AI-ready platform – combining popular AI software with general-purpose CPUs and GPUs, networking, and storage – can be appealing. With these packages, much of the fine-tuning calibration has been carried out prior to deployment, reducing barriers to AI storage adoption for many. However, this initial convenience – unsurprisingly – comes with a significant price tag.

Futureproofing your investment

Undeniably, selecting the most fitting AI data storage platform means weighing a combination of metrics, such as cost, scalability, and performance. Given the sheer volume of data involved, making the right decision for your organisation is crucial.

Choosing an AI function that isn’t quite right for your business can be a costly mistake, in both time and monetary terms. Data centres are therefore being designed to be future-proof and to enable long-term cost efficiency by allowing room for potential further increases in power demands.

As is the case with any storage product decision, engaging with vendors to fully understand how their product will fit the requirements of AI is a vital step before the implementation process. Organisations should therefore start by requesting demonstrations and evaluations of any prospective systems, as a prerequisite to any further purchase or implementation decisions. It is through these steps that organisations can ensure they are selecting the most appropriate AI function for their business and making well-informed purchasing decisions.


About the Author

Russell Skingsley is the Hitachi Data Systems CTO for Asia Pacific. Digital transformation improves cost-efficiency, time to market, customer experience, and revenue through better data management. Hitachi Data Systems uses data to power the digital enterprise.

Featured image: ©Gorodenkoff