50x faster, 50x more efficient: British AI startup backed by Arm delivers stunning gains in performance and power consumption using a cheap $30 development board

In March 2024, we reported how British AI startup Literal Labs was working to make GPU-based training obsolete with its Tsetlin Machine, a machine learning model that uses logic-based learning to classify data.

It works through Tsetlin automata, which establish logical connections between features in input data and classification rules. Based on whether decisions are correct or incorrect, the machine adjusts these connections using rewards or punishments.
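To make the reward/punishment mechanic concrete, here is a minimal Python sketch of a single two-action Tsetlin automaton (an illustration of the textbook mechanism, not Literal Labs' implementation): an integer state records how strongly the automaton currently favours one of two actions, and feedback simply nudges that state up or down.

```python
import random

class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2n states.

    States 1..n select action 0; states n+1..2n select action 1.
    Rewards move the state deeper into the current action's half
    (strengthening the decision); penalties move it towards the
    opposite half (weakening it).
    """

    def __init__(self, n=100):
        self.n = n
        self.state = n  # start at the boundary, weakly favouring action 0

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1

# Hypothetical noisy environment in which action 1 is correct 90% of
# the time: the automaton converges on action 1 despite the noise.
automaton = TsetlinAutomaton()
for _ in range(10_000):
    p_reward = 0.9 if automaton.action() == 1 else 0.1
    if random.random() < p_reward:
        automaton.reward()
    else:
        automaton.penalize()
print(automaton.action())  # almost always 1
```

In a full Tsetlin Machine, one such automaton sits behind every candidate literal and decides whether that literal is included in a rule; the rewards and punishments described above are exactly these state nudges.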

This approach, developed by Soviet mathematician Mikhail Tsetlin in the 1960s, contrasts with neural networks: instead of modeling biological neurons, it teaches simple automata to perform tasks such as classification and pattern recognition.
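For intuition, here is a hedged sketch of how such automata-learned rules classify an input, using the common Tsetlin Machine formulation in which clauses are logical ANDs of literals that vote for or against a class. The clauses below are hand-written to encode XOR purely for illustration; in a trained machine, Tsetlin automata learn which literals each clause includes.

```python
def evaluate_clause(literals, x):
    # A clause fires only if every included literal holds. Literals are
    # (feature_index, expected_value) pairs, so a clause can test a
    # binary feature directly or its negation.
    return all(x[i] == v for i, v in literals)

def classify(x, positive_clauses, negative_clauses):
    # Majority vote: positive clauses add evidence for the class,
    # negative clauses subtract it.
    votes = sum(evaluate_clause(c, x) for c in positive_clauses) \
          - sum(evaluate_clause(c, x) for c in negative_clauses)
    return 1 if votes > 0 else 0

# Hypothetical clauses encoding XOR of two binary features:
positive = [[(0, 1), (1, 0)], [(0, 0), (1, 1)]]  # fire when inputs differ
negative = [[(0, 1), (1, 1)], [(0, 0), (1, 0)]]  # fire when inputs match

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, classify(x, positive, negative))  # prints 0, 1, 1, 0
```

Because inference reduces to AND/OR/NOT operations and integer counting rather than the multiply-accumulate arithmetic of neural networks, it maps naturally onto small CPUs without AI accelerators, which helps explain the results below.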

Energy-efficient design

Now, Literal Labs, backed by Arm, has developed a model using Tsetlin Machines that delivers high accuracy despite a compact size of just 7.29 KB, dramatically improving anomaly detection for edge AI and IoT deployments.

The model was benchmarked by Literal Labs using the MLPerf Inference: Tiny suite and tested on a $30 NUCLEO-H7A3ZI-Q development board, which has a 280 MHz Arm Cortex-M7 processor and no AI accelerator. The results show that Literal Labs’ model achieves inference speeds 54 times faster than traditional neural networks while consuming 52 times less energy.

Compared to the industry’s top-performing models, Literal Labs’ model offers both lower latency and greater power efficiency, making it suitable for low-power devices such as sensors. That performance makes it feasible for applications in industrial IoT, predictive maintenance and health diagnostics, where fast and accurate anomaly detection is crucial.

Using such a compact and energy-efficient model could help scale the deployment of AI across industries, reducing costs and increasing the accessibility of AI technology.

Literal Labs says: “Smaller models are particularly beneficial in such deployments because they require less memory and processing power, allowing them to run on cheaper hardware with lower specifications. This not only reduces costs, but also broadens the range of devices that can support advanced AI functionality, making it feasible to deploy AI solutions at scale in resource-constrained environments.”
