Researcher discovers new way to double computer speeds for free – but there’s an obvious catch: it may only work with Nvidia GPUs and Arm CPUs for now

A researcher claims to have discovered a new approach that could potentially double the speed of computers without additional hardware costs.

The method, called Simultaneous and Heterogeneous Multithreading (SHMT), was set out in a paper co-authored by UC Riverside associate professor of electrical and computer engineering Hung-Wei Tseng with computer science graduate student Kuan-Chieh Hsu.

The SHMT framework currently runs on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit (TPU) hardware accelerator. In tests, the system achieved a 1.96x speedup and a 51% reduction in energy consumption.

Energy reduction

Tseng explained that modern computing devices increasingly integrate GPUs, hardware accelerators for AI and ML, or DSP units as essential components. However, these components process information separately, and shuttling work between them creates a bottleneck. SHMT attempts to address this problem by running these components simultaneously on the same task, increasing processing efficiency.
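
To make the idea concrete, here is a minimal conceptual sketch, not the authors’ SHMT framework: it splits one workload into chunks and hands each chunk to a different “compute unit” at the same time, instead of using the units one after another. The worker functions, chunk sizes, and thread-based dispatch are all illustrative stand-ins for real CPU/GPU/TPU kernels.

```python
# Conceptual sketch only: placeholder "devices" run shares of the
# same task concurrently instead of sequentially.
from concurrent.futures import ThreadPoolExecutor

def run_on_cpu(chunk):
    return [x * 2 for x in chunk]   # stand-in for a CPU kernel

def run_on_gpu(chunk):
    return [x * 2 for x in chunk]   # stand-in for a GPU kernel

def run_on_tpu(chunk):
    return [x * 2 for x in chunk]   # stand-in for an accelerator kernel

def sequential(data):
    # Conventional pattern: each unit handles its part one at a time.
    third = len(data) // 3
    return (run_on_cpu(data[:third])
            + run_on_gpu(data[third:2 * third])
            + run_on_tpu(data[2 * third:]))

def simultaneous(data):
    # SHMT-style idea: every unit gets a share of the same task
    # and all of them work at once.
    third = len(data) // 3
    chunks = [data[:third], data[third:2 * third], data[2 * third:]]
    workers = [run_on_cpu, run_on_gpu, run_on_tpu]
    with ThreadPoolExecutor(max_workers=3) as pool:
        parts = pool.map(lambda pair: pair[0](pair[1]), zip(workers, chunks))
    return [y for part in parts for y in part]

if __name__ == "__main__":
    data = list(range(9))
    assert sequential(data) == simultaneous(data)
    print(simultaneous(data))
```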

The implications of this discovery are significant. Not only could it reduce the cost of computer hardware, but it could also reduce CO2 emissions from the energy production needed to run servers in large data processing centers. Additionally, it could reduce the demand for water used to cool servers.

Tseng told us that the SHMT framework, if adopted by Microsoft in a future version of Windows, could bring a free performance boost to users. The study’s energy-saving claim is based on the idea that reducing execution time means consuming less energy, even when using the same hardware.
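
As a rough sanity check of that reasoning (my own arithmetic, using only the figures reported above): if average power stays roughly the same, energy is power multiplied by time, so a 1.96x speedup should cut energy use by about half, which is in the same ballpark as the reported 51% reduction.

```python
# Back-of-the-envelope estimate, assuming roughly constant average power
# (real hardware only approximates this): energy = power * time.
speedup = 1.96                        # reported speedup
relative_time = 1 / speedup           # new runtime as a fraction of the old
energy_saving = 1 - relative_time     # fraction of energy saved at constant power
print(f"~{energy_saving:.0%} energy saving")   # prints "~49% energy saving"
```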

However, there is a catch (isn’t that always the case?). Tseng’s paper cautions that further research is needed to answer questions about system implementation, hardware support, code optimization, and which applications stand to benefit most from the approach.

While no hardware engineering efforts are needed, Tseng says “we certainly need re-engineering of the runtime system (e.g. the operating system drivers) and the programming languages (e.g. Tensorflow/PyTorch)” to make it work.

The paper, presented at the 56th annual IEEE/ACM International Symposium on Microarchitecture in Toronto, Canada, was recognized by the Institute of Electrical and Electronics Engineers (IEEE), which selected it as one of 12 papers to be included in its “Top Picks from the Computer Architecture Conferences” issue, due to be released later this year.
