Hewlett Packard Enterprise is putting memory ahead of processing with a game-changing new computing concept.
Powered by its groundbreaking research project, The Machine, the new proof-of-concept work enables the company to realise currently unachievable performance and efficiency gains by transforming the intrinsic computing model of the last half century.
“We have achieved a major milestone with The Machine research project — one of the largest and most complex research projects in our company’s history,” said Antonio Neri, Executive Vice President and General Manager of the Enterprise Group at HPE.
“With this prototype, we have demonstrated the potential of Memory-Driven Computing and also opened the door to immediate innovation. Our customers and the industry as a whole can expect to benefit from these advancements as we continue our pursuit of game-changing technologies.”
Designed by researchers at HPE and its research arm, Hewlett Packard Labs, the prototype, which was brought online in October, shows the building blocks of the new architecture working together. HPE has demonstrated:
- Compute nodes accessing a shared pool of Fabric-Attached Memory;
- An optimised Linux-based operating system (OS) running on a customised System on a Chip;
- Photonics/optical communication links, including the new X1 photonics module, online and operational; and
- New software programming tools designed to take advantage of abundant persistent memory.
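The programming model those tools target treats data structures as living in a large pool of byte-addressable persistent memory, updated in place rather than serialised through storage I/O. The sketch below is not HPE's tooling; it is a minimal, hypothetical illustration of that idea using an ordinary memory-mapped file (the file path and field layout are invented for the example):

```python
# Minimal sketch (NOT HPE's API) of the byte-addressable persistent-memory
# programming model: a counter lives "in memory" and is updated with
# in-place memory operations instead of read()/write() calls.
import mmap
import os
import struct

PATH = "demo.pmem"  # hypothetical stand-in for a persistent-memory region

# Create an 8-byte zero-filled backing file to play the role of pmem.
with open(PATH, "wb") as f:
    f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 8) as m:
        (count,) = struct.unpack_from("<q", m, 0)   # load directly from memory
        struct.pack_into("<q", m, 0, count + 1)     # ordinary in-place store
        m.flush()  # flush to the backing store (analogue of a pmem flush)

# Reopen: the update survived because it lives in the mapped region itself.
with open(PATH, "rb") as f:
    (count,) = struct.unpack_from("<q", f.read(), 0)

os.remove(PATH)
print(count)
```

On a fresh file the counter reads back as 1; the point is simply that persistence falls out of where the data lives, not out of an explicit save step.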
During the design phase of the prototype, simulations predicted that this architecture could outperform current computing by multiple orders of magnitude. The company has run new software programming tools on existing products, demonstrating execution speeds improved by up to 8,000 times on a variety of workloads. HPE expects to achieve similar results as it expands the capacity of the prototype with more nodes and memory.
In addition to bringing added capacity online, The Machine research project will increase focus on exascale computing, a developing area of High Performance Computing that aims to create computers several orders of magnitude more powerful than any system online today. HPE’s Memory-Driven Computing architecture scales from tiny IoT devices up to the exascale, making it an ideal foundation for a wide range of emerging high-performance computing and data-intensive workloads, including big data analytics.
Memory-Driven Computing & Commercialization
HPE is committed to rapidly commercialising the technologies developed under The Machine research project into new and existing products. These technologies currently fall into four categories: Non-volatile memory, fabric (including photonics), ecosystem enablement and security.
Non-volatile Memory
HPE continues its work to bring true, byte-addressable non-volatile memory (NVM) to market and plans to introduce it as soon as 2018/2019. Using technologies from The Machine project, the company developed HPE Persistent Memory — a step on the path to byte-addressable non-volatile memory, which aims to approach the performance of DRAM while offering the capacity and persistence of traditional storage. The company launched HPE Persistent Memory in the HPE ProLiant DL360 and DL380 Gen9 servers.
Fabric (including Photonics)
Due to its photonics research, HPE has taken steps to future-proof products, such as enabling the HPE Synergy systems that will be available next year to accept future photonics/optics technologies currently in advanced development. Looking further ahead, HPE plans to integrate photonics into additional product lines, including its storage portfolio, as soon as 2018/2019. The company also plans to bring fabric-attached memory to market, leveraging the high-performance interconnect protocol being developed under the recently announced Gen-Z Consortium, which HPE has joined.
Ecosystem Enablement
Much work has already been completed to build software for future memory-driven systems. HPE launched a Hortonworks/Spark collaboration this year to bring software built for Memory-Driven Computing to market. In June 2016, the company also began releasing code packages on GitHub to begin familiarizing developers with programming on the new memory-driven architecture. The company plans to put this code into existing systems within the next year and to bring next-generation analytics and applications to new systems as soon as 2018/2019. As part of the Gen-Z Consortium, HPE plans to start integrating ecosystem technologies and specifications from this industry collaboration into a range of products during the next few years.
Security
With this prototype, HPE demonstrated new, secure memory interconnects in line with its vision to embed security throughout the entire hardware and software stack. HPE plans to further this work with new hardware security features in the next year, followed by new software security features over the next three years. Beginning in 2020, the company plans to bring these solutions together with additional security technologies currently in the research phase.
Read Keith McAuliffe’s blog post