The expression-matching robot will haunt your dreams, but one day it may be your only friend

Most of the best robots, the ones that can walk, run, climb stairs and do parkour, don’t have faces, and there may be a good reason for that. If any of them had mugs like this new research robot’s, we would probably stand frozen in front of them, staring wordlessly as they ran us over.

Building robots with faces and the ability to mimic human expressions is an ongoing fascination in the robotics research world, and while a smile may take less battery power and fewer load-bearing motors than a backflip, the bar is much higher for a robot smile than for a robot jump.

Still, Columbia Engineering’s development of its latest robot, Emo, and its work on “human-robot facial co-expression” is impressive and important. In a recently published scientific paper and YouTube video, the researchers describe their work and demonstrate Emo’s ability to make eye contact and instantly imitate and replicate human expressions.

To say the robot’s range of human-like expressions is creepy would be an understatement. Like many robot faces of its generation, Emo’s head shape, eyes and silicone skin all resemble a human face, but not closely enough to avoid the dreaded uncanny valley.

That’s okay, because the goal of Emo isn’t to bring a talking robot head into your home today. This is about programming, testing and learning… and maybe bringing an expressive robot into your home in the future.

Emo’s eyes are equipped with two high-resolution cameras that let it make “eye contact” and, using one of its AI models, watch you and predict your facial expressions.

Because human interaction often involves mirroring, meaning we unconsciously imitate the movements and expressions of the people we interact with (cross your arms in a group and watch as everyone else gradually crosses theirs), Emo uses its second model to mimic the facial expression it has predicted.
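For the curious, here is a minimal sketch of how a two-model co-expression loop like this could be wired together. The function names and interfaces (`predict_expression`, `expression_to_motor_commands`) are hypothetical stand-ins for illustration, not the Columbia team’s actual code.

```python
# Hypothetical sketch of a two-model facial co-expression loop.
# Model 1 predicts the human's upcoming expression from recent camera
# frames; Model 2 maps that predicted expression to actuator commands.
# Names and interfaces are illustrative assumptions, not the paper's code.

import time

import cv2  # OpenCV, for camera capture
import numpy as np


def predict_expression(frames: list[np.ndarray]) -> np.ndarray:
    """Model 1 (stand-in): predict the human's expression a few hundred
    milliseconds ahead, as a vector of facial landmarks."""
    raise NotImplementedError("trained predictor model goes here")


def expression_to_motor_commands(landmarks: np.ndarray) -> np.ndarray:
    """Model 2 (stand-in): map a target expression to actuator positions
    (Emo reportedly has 26 actuators under its skin)."""
    raise NotImplementedError("trained self-model goes here")


def co_expression_loop(camera_index: int = 0, history: int = 8) -> None:
    cap = cv2.VideoCapture(camera_index)
    recent_frames: list[np.ndarray] = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        recent_frames.append(frame)
        recent_frames = recent_frames[-history:]  # keep a short window
        if len(recent_frames) == history:
            target = predict_expression(recent_frames)
            commands = expression_to_motor_commands(target)
            # send_to_actuators(commands)  # hardware interface omitted
        time.sleep(1 / 30)  # poll at roughly 30 fps
```

The key design point, predicting the expression slightly ahead of time rather than reacting to it, is what lets the robot smile with you instead of a beat after you.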

“By sensing subtle changes in a human face, the robot was able to predict an impending smile 839 milliseconds before the human smiled and adjust its face to smile at the same time,” the researchers write in their paper.

In the video, Emo’s facial expressions change as quickly as the researcher’s. No one will mistake its smile for a normal human smile, its sad look is cringe-inducing, and its look of surprise is terrifying, but the 26 actuators under its skin come pretty close to delivering recognizable human expressions.

(Image credit: Columbia Engineering)

“I think predicting human facial expressions represents a major step forward in human-robot interaction. Traditionally, robots have not been designed to take humans into account,” Columbia PhD student Yuhang Hu says in the video.

How Emo learned human expressions is even more fascinating. To understand how its own face and motors work, the researchers placed Emo in front of a camera and let it make random facial expressions. This allowed Emo to learn the connection between its motor commands and the resulting expressions.
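This self-modeling stage is conceptually simple: issue random motor commands, watch what the face does, and record the pairs. Here is a hedged sketch of what that data collection could look like; `capture_frame` and `extract_landmarks` are assumed helpers, not functions from the published work.

```python
# Hypothetical sketch of the "self-modeling" stage: the robot issues
# random motor commands while watching itself in a camera, building a
# dataset that links commands to the expressions they produce.
# Helper names are illustrative assumptions.

import numpy as np

NUM_ACTUATORS = 26  # Emo's reported actuator count


def random_motor_command(rng: np.random.Generator) -> np.ndarray:
    """Sample a random actuator configuration in a normalized [0, 1] range."""
    return rng.uniform(0.0, 1.0, size=NUM_ACTUATORS)


def extract_landmarks(image: np.ndarray) -> np.ndarray:
    """Stand-in for a facial-landmark detector run on the robot's own face."""
    raise NotImplementedError("landmark model goes here")


def collect_self_model_data(capture_frame, steps: int = 10_000):
    """Pair each random command with the expression it produced.
    `capture_frame` is assumed to apply a command and return a camera image."""
    rng = np.random.default_rng(0)
    commands, expressions = [], []
    for _ in range(steps):
        cmd = random_motor_command(rng)
        image = capture_frame(cmd)  # actuate the face, then photograph it
        expressions.append(extract_landmarks(image))
        commands.append(cmd)
    # A model trained on (expression -> command) pairs from this dataset
    # could then serve as the mimicry model used at runtime.
    return np.stack(commands), np.stack(expressions)
```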

They also trained the AI on real human facial expressions. The combination of these training methods makes Emo about as close to real-time human expression as we’ve seen in a robot.

The goal, researchers note in the video, is for Emo to potentially become a front end for an AI or artificial general intelligence (basically a thinking AI).

Emo arrives just weeks after Figure AI unveiled its OpenAI-infused Figure 01 robot and its ability to understand and act on human conversations. Notably, that robot had no face.

I can’t help but imagine what an Emo head on a Figure 01 robot would look like. That’s a future worth losing sleep over.
