Hewlett Packard Enterprise accelerates enterprise AI adoption with HPE GreenLake for File Storage


Hewlett Packard Enterprise announced the expansion of its HPE GreenLake for File Storage capabilities designed to power large-scale enterprise AI and data lake workloads. The latest iteration introduces 100% high-density all-flash options, propelling the company to the forefront of AI storage innovation.

“To drive AI initiatives and remain competitive in today’s marketplace, acquiring an AI-ready file storage solution that offers enterprise-grade performance, efficiency, and scalability customised for AI is essential,” said Kamal Kashyap, Director, India – Storage Business Unit, Hewlett Packard Enterprise India. “HPE GreenLake for File Storage plays a significant role in amplifying the efficiency of high-volume data applications, ensuring enterprise-level performance at AI scale. It is not just a storage solution but a catalyst for transformation. As today’s businesses move into or expand their AI initiatives, we at HPE are empowering them to harness the potential of AI, extract maximum value from their data, and enhance productivity, all while promoting sustainability.”

When compared with the currently available version of HPE GreenLake for File Storage, the new options offer 4 times the capacity and up to 2 times the system performance per rack unit. These improvements increase AI throughput by a factor of 2, and reduce power consumption by up to 50%. Through these enhancements, HPE is taking another major step to enable customers to achieve enterprise performance, simplicity, and enhanced efficiency, all at the scale of AI and data lakes. With the latest high-density storage racks, HPE GreenLake for File Storage has increased the capacity density of the high-end offering by a factor of 7 compared with what was published in mid-2023. In addition, HPE GreenLake for File Storage now offers up to 2.3 times the capacity density of competitors.

Powering enterprise performance at AI scale
HPE GreenLake for File Storage accelerates the most data-intensive applications with enterprise AI-scale performance. This is the performance that spans all the stages of AI – from data aggregation, data preparation, training, and tuning to inference. Moreover, it’s not just performance that peaks at a given time for a small data set. Instead, it’s fast, sustained performance that spans the full scale of your data for the most demanding, data-intensive AI applications, including GenAI and large language models (LLMs). Enterprise-scale AI performance helps extract more value from all the aggregated data, accelerating insights and providing a real competitive edge.

HPE GreenLake for File Storage has a disaggregated, shared-everything, highly resilient modular architecture that allows performance and capacity to scale independently, and it is designed for exabyte scale. With all-NVMe speed for fast, predictable performance and no front-end caching, data movement between media, or tiered data pipelines, it can supercharge the most data-intensive AI applications.

Enhancing efficiency at AI scale
HPE GreenLake for File Storage can bring down AI storage costs with 4x the capacity density per rack unit and half the power consumption. It can also lower the carbon footprint with industry-leading data reduction, non-disruptive upgrades, and an AI storage as-a-service consumption model that helps eliminate overprovisioning. It enables scaling performance and capacity independently for higher efficiency at lower cost, and it maximises GPU utilisation, and therefore GPU ROI, with enterprise performance at AI scale.

HPE GreenLake for File Storage provides overload-free snapshots and native replication, superior flash efficiency, and enhanced data reduction via a similarity algorithm that, unlike compression and deduplication, reduces data with both a global and a fine-grained approach. Savings are 2:1 for life sciences data; 3:1 for pre-reduced backups, pre-compressed log files, and HPC and animation data; and 8:1 for uncompressed time series data.
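To make those ratios concrete, the short Python sketch below shows how a reduction ratio translates into the physical flash a logical data set would occupy; the data-set sizes are hypothetical illustrations, and only the reduction ratios come from the figures above.

```python
# Rough illustration of what an N:1 data-reduction ratio means for physical capacity.
# The 1 PB logical data-set sizes are hypothetical; only the ratios are quoted above.

def physical_tb(logical_tb: float, reduction_ratio: float) -> float:
    """Physical capacity consumed by a logical data set at a given reduction ratio."""
    return logical_tb / reduction_ratio

workloads = {
    "life sciences (2:1)": (1000.0, 2.0),
    "pre-reduced backups / HPC / animation (3:1)": (1000.0, 3.0),
    "uncompressed time series (8:1)": (1000.0, 8.0),
}

for name, (logical, ratio) in workloads.items():
    print(f"{name}: {logical:.0f} TB logical -> {physical_tb(logical, ratio):.0f} TB physical")
```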

With support for optimised GPU utilisation via InfiniBand, NVIDIA GPUDirect®, and RDMA, HPE GreenLake for File Storage accelerates AI workloads by improving performance for model training and tuning via faster checkpoints. InfiniBand connectivity from front-end hosts to networks, including the NVIDIA Quantum-2 InfiniBand platform, offers flexibility. Customers can scale up to 720 PB of effective capacity (with 3:1 data reduction) for large-scale enterprise AI file data.
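As a quick back-of-the-envelope check of that figure, the sketch below converts the quoted 720 PB of effective capacity into the approximate physical flash behind it, assuming only the 3:1 data reduction ratio stated above.

```python
# Back-of-the-envelope arithmetic for the capacity figure quoted above.
# Assumes the stated 3:1 data reduction ratio; not an HPE sizing calculation.

effective_pb = 720       # effective capacity quoted for large-scale enterprise AI file data
reduction_ratio = 3.0    # 3:1 data reduction, as stated above

physical_pb = effective_pb / reduction_ratio
print(f"{effective_pb} PB effective at {reduction_ratio:.0f}:1 implies roughly {physical_pb:.0f} PB of physical flash")
# -> 720 PB effective at 3:1 implies roughly 240 PB of physical flash
```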
