Robots could become much better learners thanks to a new method developed by Dyson-backed researchers. By stripping away much of the traditional complexity of teaching robots to perform tasks, the approach could bring their learning a step closer to how humans acquire skills.

One of the biggest hurdles in teaching robots new skills is converting complex, high-dimensional data, such as images from onboard RGB cameras, into actions that achieve specific goals. Existing methods typically rely on 3D representations that require accurate depth information, or on hierarchical predictions paired with motion planners or discrete policies.
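To see why this is hard, consider the conventional setup: a network must regress low-level actions directly from raw pixels, with nothing tying the action vector to the image it came from. The sketch below is illustrative only, not the authors' code; the network shape, image size, and 7-dimensional action are all assumptions.

```python
# Minimal sketch (PyTorch, illustrative only) of a conventional pipeline:
# a CNN regresses a low-level action vector straight from an RGB image.
import torch
import torch.nn as nn

class RGBPolicy(nn.Module):
    """Maps a camera image directly to a low-level action."""
    def __init__(self, action_dim: int = 7):   # e.g. 6-DoF pose + gripper
        super().__init__()
        self.encoder = nn.Sequential(           # small image encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, action_dim)   # regresses the action

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(rgb))

policy = RGBPolicy()
obs = torch.rand(1, 3, 128, 128)   # one RGB observation
action = policy(obs)               # shape (1, 7): the predicted action
```

Because the action lives in a completely different space from the observation, such a model must learn the mapping between the two from scratch, which is data-hungry.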

Researchers from Imperial College London and the Dyson Robot Learning Lab have unveiled a new approach that could tackle this problem. The “Render and Diffuse” (R&D) method aims to bridge the gap between high-dimensional observations and low-level robot actions, especially when data is scarce.

R&D, described in a paper posted to the arXiv preprint server, tackles the problem by using virtual renderings of a 3D model of the robot. By rendering candidate low-level actions directly within the observation space, so that the robot effectively sees what an action would look like in its own camera view, the researchers were able to simplify the learning process.
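The sketch below is a loose, hypothetical illustration of that idea: a candidate action is drawn into the camera image (here, via a toy pinhole projection of the gripper position) so that observation and action share one space, and the action is then refined over several steps in the spirit of diffusion. The intrinsics, renderer, network, and update rule are all stand-ins, not the paper's implementation.

```python
# Hypothetical sketch of the render-and-refine idea: draw the candidate
# action into the image, then iteratively refine it, diffusion-style.
import torch
import torch.nn as nn

FX = FY = 200.0   # assumed pinhole focal lengths
CX = CY = 64.0    # assumed principal point for a 128x128 image

def render_gripper(img: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
    """Draw the candidate gripper position into a copy of the image."""
    out = img.clone()
    u = int((FX * pos[0] / pos[2] + CX).clamp(0, 127))  # project x to pixels
    v = int((FY * pos[1] / pos[2] + CY).clamp(0, 127))  # project y to pixels
    out[:, v, u] = 1.0          # mark the rendered action pixel
    return out

# Stand-in "denoiser": in practice this would be a trained network.
denoiser = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 128, 3))

obs = torch.rand(3, 128, 128)                             # RGB observation
action = torch.tensor([0.0, 0.0, 1.0]) + 0.3 * torch.randn(3)  # noisy (x, y, depth)
for step in range(5):                           # iterative refinement loop
    rendered = render_gripper(obs, action)      # action shown *in* the image
    delta = denoiser(rendered.unsqueeze(0))[0]  # predicted correction
    action = action + 0.1 * delta               # move toward a better action
```

The key design choice this mimics is that the network only ever reasons about images: both what the camera sees and what the action would look like are pixels, so far less data is needed to connect the two.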

(Image credit: Vosylius et al.)

Visualizing actions in an image