‘Inspired by the human brain’: Intel introduces a neuromorphic system that aims to mimic gray matter with a clear goal: to make machines exponentially faster and much more energy efficient, just like us
Neuromorphic computing aims to mimic the structure of the human brain to enable more efficient data processing, with faster speeds and higher accuracy, and it is a hot topic at the moment. Many universities and technology companies are working on it, including scientists at Intel, who built the world’s largest “brain-based” computer system for Sandia National Laboratories in New Mexico.
Intel’s creation, called Hala Point, is only the size of a microwave oven, but has 1.15 billion artificial neurons. That’s a huge step up from the 50 million-neuron capacity of its predecessor, Pohoiki Springs, which debuted four years ago. There’s a theme with Intel’s naming, in case you were wondering: it’s locations in Hawaii.
Hala Point is ten times faster than its predecessor and 15 times denser, with up to a million neurons on a single chip. Pohoiki Springs’ chips supported only 128,000.
Make full use of it
Equipped with 1,152 Loihi 2 research processors (Loihi is a volcano in Hawaii), the Hala Point system will be tasked with harnessing the power of massive neuromorphic computations. “Our colleagues at Sandia have consistently applied our Loihi hardware in ways we never imagined, and we look forward to their research with Hala Point leading to breakthroughs in the scale, speed and efficiency of many high-impact computing problems,” said Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs.
Because a neuromorphic system of this size has never existed before, Sandia developed special algorithms to ultimately utilize the computer’s full capabilities.
“We believe this new level of experimentation – the beginning, we hope, of large-scale neuromorphic computing – will help create a brain-based system with an unprecedented ability to process, respond to, and learn from real-life data,” said Sandia lead researcher Craig Vineyard.
His colleague, fellow researcher Brad Aimone, added: “One of the key differences between brain-like computing – both in our brains and in neuromorphic hardware – and the regular computing we use today is that the computation is spread across many neurons in parallel, rather than in the long, serialized processes that are an inescapable part of conventional computing. As a result, the more neurons we have in a neuromorphic system, the more complex the computations we can perform. We see this in real brains. Even the smallest mammalian brains have tens of millions of neurons; our brains have about 80 billion. We see it in today’s AI algorithms. Bigger is much better.”
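To get a feel for the parallelism Aimone describes, here is a minimal toy sketch (not Intel’s Loihi software, and all parameter values are illustrative) of a leaky integrate-and-fire update in which every artificial neuron integrates its input and checks its spiking threshold in a single vectorized step, rather than one neuron at a time:

```python
import numpy as np

# Illustrative toy model: one leaky integrate-and-fire timestep
# applied to a whole population of neurons at once.
rng = np.random.default_rng(0)
n_neurons = 1_000_000           # roughly one Loihi 2 chip's worth of neurons
v = np.zeros(n_neurons)         # membrane potential of every neuron
leak, threshold = 0.9, 1.0      # hypothetical decay factor and spike threshold

def step(v, inputs):
    """Advance all neurons one timestep in parallel."""
    v = leak * v + inputs           # every potential decays and integrates input
    spikes = v >= threshold         # every neuron checks its threshold at once
    v = np.where(spikes, 0.0, v)    # neurons that spiked reset to zero
    return v, spikes

# One timestep with small random input currents (too weak to spike yet)
v, spikes = step(v, rng.random(n_neurons) * 0.2)
```

The update touches a million neurons with no per-neuron loop, which is the shape of computation neuromorphic hardware accelerates natively; a conventional serial program would instead walk through the neurons one by one.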