HPE has announced the expansion of its GreenLake portfolio with the launch of a supercomputing cloud service designed to help companies train, tune, and deploy the kind of large-scale artificial intelligence models that power tools like AI writers.
The company hopes its HPE GreenLake for Large Language Models (LLMs) will be as beneficial to startups as to global enterprises, and sees it as a step toward bringing applications that support climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation to market.
German AI startup Aleph Alpha is HPE’s first partner, and the collaboration will put its pre-trained LLM, which has been optimized for analyzing and processing large amounts of data, into the hands of customers. The model in question, Luminous, is available in multiple languages, including English, French, German, Italian, and Spanish.
HPE GreenLake for LLMs
HPE CEO Antonio Neri said: “Organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly.”
Comparing GreenLake for LLMs with its other cloud computing offerings, HPE describes how the AI-native architecture has been purpose-built for large-scale AI training and simulation workloads.
It is hoped that the on-demand service will unlock cost-saving potential for HPE’s customers while giving them access to HPE Cray XD supercomputers and Nvidia H100 GPUs.
Order books are open and customers are invited to sign up now, but general availability has yet to be announced. HPE is targeting the end of calendar year 2023 for North America, while Europe and other regions will have to wait longer.
For some, the wait may be worth their while, because HPE intends to make its cloud deployments carbon neutral, including GreenLake for LLMs, which is set to benefit from renewable energy and recyclable liquid cooling technology.