What Metrics Should You Evaluate When Looking at Hyperconverged Infrastructure?

Hyperconvergence continues to be a red-hot category of infrastructure

As hyperconverged infrastructure drives a monumental transformation in the data center, the metrics we use to measure the value of that technology need to change as well.

Metric shifts happen quite often. Fitbit, for example, changed how we measure daily exercise from 30 minutes of activity three days a week to 10,000 steps a day, a benchmark now accepted by the American Heart Association and the World Health Organization.

When it comes to hyperconverged infrastructure, some in the IT industry view its merits through a storage lens. This seems logical, because hyperconverged technology offers many benefits in how we provision, consolidate, and manage storage. But the metrics these evaluators focus on are too storage-specific, such as the number of nodes or terabytes, rather than the VM-related measurements commonly used for other software-defined infrastructures such as the cloud. Since hyperconverged infrastructure shifts the paradigm from managing infrastructure components to managing VMs, the metrics used to measure it should shift as well.

But with vendor bias in play, how can customers find the hyperconverged metrics that truly matter?

In 2016, when hyperconverged adoption was expanding faster than ever, ActualTech Media conducted its State of Hyperconverged Infrastructure survey of 1,000 IT professionals. The goal of the report was to identify the top challenges IT teams face and assess how hyperconverged infrastructure could address them. When asked which criteria are most important when evaluating IT solutions, respondents pointed to cost/ROI, operational efficiency (defined by scalability and performance options), and resiliency (defined by high availability and integrated backup and replication). (See figure 6 below.)

Based on these responses, positive business outcomes appear to be the top goal when evaluating IT solutions. The themes identified in the chart all involve making a long-term investment that will ultimately save time, money, and in some cases, resources. This is likely why operational efficiency factors are a top criterion: the company ultimately saves OPEX. Disaster recovery and high availability fit the same mold because they improve IT resiliency, which, in turn, limits data center downtime and the financial risk that comes with it.

IDC found similar results in its survey, as reported in the HPE SimpliVity Hyperconvergence Drives Operational Efficiency and Customers are Benefiting white paper. Based on those results, HPE SimpliVity powered by Intel® customers were primarily looking for cost-saving factors, including improved operational efficiency, improved backup/disaster recovery, improved storage utilization, improved scalability, and data center consolidation, all weighted fairly equally. (See figure 3 below.)

It stands to reason, then, that IT professionals are looking for the following metrics when considering hyperconverged solutions (a worked example follows the list):

  • Cost metrics: ROI, total cost of ownership (TCO), CAPEX and OPEX savings
  • Operational efficiency metrics: time to deployment, VM to administrator ratios, device consolidation, power usage effectiveness (PUE)
  • Recovery and availability metrics: ability to sustain a device failure without data loss, recovery time and recovery point objectives (RTOs/RPOs), downtime/uptime percentage
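
To make these metrics concrete, here is a minimal sketch of how a few of them are commonly calculated. This is illustrative only: every input figure and function name below is an assumption invented for the example, not data from the surveys cited above.

```python
# Minimal sketch of common infrastructure evaluation metrics.
# All figures below are illustrative assumptions, not survey data.

def uptime_percentage(downtime_hours: float, period_hours: float = 24 * 365) -> float:
    """Percent of the period the system was available."""
    return 100.0 * (period_hours - downtime_hours) / period_hours

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    IT equipment energy. 1.0 is the theoretical ideal; lower is better."""
    return total_facility_kwh / it_equipment_kwh

def roi_percent(total_gain: float, total_cost: float) -> float:
    """Simple ROI: net gain from an investment relative to its cost."""
    return 100.0 * (total_gain - total_cost) / total_cost

def vm_to_admin_ratio(vm_count: int, admin_count: int) -> float:
    """How many VMs each administrator manages; a higher ratio
    suggests greater operational efficiency."""
    return vm_count / admin_count

if __name__ == "__main__":
    # Hypothetical environment: 4 hours of downtime per year,
    # 1.4M kWh facility draw against 1.0M kWh of IT load, a $500K
    # investment returning $750K in savings, and 800 VMs run by 4 admins.
    print(f"Uptime:        {uptime_percentage(4):.3f}%")           # ~99.954%
    print(f"PUE:           {pue(1_400_000, 1_000_000):.2f}")       # 1.40
    print(f"ROI:           {roi_percent(750_000, 500_000):.0f}%")  # 50%
    print(f"VMs per admin: {vm_to_admin_ratio(800, 4):.0f}")       # 200
```

In practice, TCO and RTO/RPO figures come from vendor quotes and recovery testing rather than simple formulas, but even back-of-the-envelope calculations like these help frame vendor claims in comparable terms.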

The next chart (figure 5 below) explores the percent improvement reported by HPE SimpliVity customers in the same IDC survey. Improvement rates were quite high across all the areas listed, with backup/recovery improving by a staggering 70%. Nine of the 12 areas in the survey saw improvement rates above 50%. Among the areas customers identified as most important, HPE SimpliVity delivered a 41% improvement in CAPEX, 54% in time to market, and 47% in the cost of power and cooling.

Business outcomes determine the critical metrics that define IT success, and the solutions that deliver those outcomes are ultimately selected. HPE SimpliVity is one example of such a solution. Hyperconverged solutions like HPE SimpliVity aim to address the top challenges, including backup/disaster recovery, storage utilization, scalability, downtime/uptime, and availability, while delivering the cost and efficiency savings businesses and IT teams need.

About the Author

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He draws on 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions for data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of the Hyperconverged Infrastructure for Dummies ebook.