Uncanny Valley! Watch as a creepy humanoid robot mimics a researcher’s facial expressions in real time – with uncanny precision

If we want to live in a world where we communicate with robots, they will have to be able to read and respond to our facial expressions very quickly.

Now scientists have moved one step closer to creating such an advanced machine.

Built by experts at Columbia University in New York, ‘Emo’ is the fastest humanoid in the world when it comes to mimicking someone’s facial expressions.

In fact, it can ‘predict’ a person’s smile by detecting subtle signals in the facial muscles and imitating them, so that the two smile at almost exactly the same time.

Amazing video shows the bot copying a researcher’s facial expressions in real time with uncanny precision and remarkable speed, thanks to cameras in its eyes.

Columbia engineers built Emo, a silicone-coated robot face that makes eye contact while accurately anticipating and replicating a person’s smile

Emo is the creation of researchers at Columbia University’s Creative Machines Lab in New York, who present their work in a new study in Scientific Reports.

“We believe that robots must learn to anticipate and imitate human expressions as a first step before maturing toward more spontaneous and self-directed expressive communication,” they say.

Most robots currently being developed around the world – such as the British bot Ameca – are trained to mimic a person’s face.

But Emo has the added advantage of ‘predicting’ when someone will smile, so it can smile at about the same time.

This creates a ‘more authentic’, human-like interaction between the two.

The researchers are working on a future where people and robots can have conversations and even connections, like Bender and Fry on ‘Futurama’.

“Imagine a world where interacting with a robot feels as natural and comfortable as talking to a friend,” said Hod Lipson, director of the Creative Machines Lab.

Researchers think that robots’ nonverbal communication skills are being overlooked. Emo is pictured here with Yuhang Hu of Creative Machines Lab

The researchers are working on a future where people and robots can have conversations and even connections, like Bender and Fry on ‘Futurama’ (photo)

“By developing robots that can accurately interpret and imitate human expressions, we move closer to a future where robots can be seamlessly integrated into our daily lives and provide companionship, assistance and even empathy.”

Emo is covered in a soft blue silicone skin, but beneath this layer are 26 tiny motors that power lifelike movements, similar to the muscles in a human face.

There are also high-resolution cameras in the pupil of each eye, which are necessary to predict human facial expressions.

To train Emo, the team recorded hours of video of human facial expressions, which the robot observed frame by frame.

After training, Emo was able to predict people’s facial expressions by observing the small changes in their faces that appear just as they begin to smile.

‘Before a smile fully forms, there are brief moments when the corners of the mouth lift and the eyes begin to wrinkle slightly,’ study author Yuhang Hu from Columbia University told MailOnline.

‘Emo is trained to recognize these types of signals through its predictive model, allowing it to anticipate human facial expressions.’

Emo is a humanoid head with a face equipped with 26 actuators that allow for a wide range of nuanced facial expressions. The head is covered in a soft silicone skin with a magnetic attachment system and has high-resolution cameras in the pupil of each eye

Emo can not only copy a person’s smile, but anticipate their smile – meaning the two can smile at approximately the same time

In addition to a smile, Hu says Emo can also predict other facial expressions, such as sadness, anger and surprise.

“Such predicted expressions can not only be used for co-expression, but can also be used for other purposes for human-robot interaction,” he said.

Emo can’t yet produce the full human range of expressions because it only has 26 ‘facial muscles’ (motors), but the team will ‘keep adding’ more.

The researchers are now working on integrating verbal communication into Emo using a large language model such as ChatGPT.

This way, Emo should be able to answer questions and have conversations, just like many of the other humanoids currently being built, such as Ameca and Ai-Da.

Meet the world’s first AI child: Chinese scientists develop a creepy entity called Tong Tong that looks and acts like a three-year-old child

It may look and act like a little girl, but this creepy entity could be the next big breakthrough in artificial intelligence (AI).

Tong Tong, which means ‘little girl’, has been dubbed the world’s first AI child after it was unveiled by scientists at the Beijing Institute for General Artificial Intelligence (BIGAI).

According to the creators, the AI child can assign itself tasks, learn autonomously and explore its environment.

And while it sounds like the plot of the sci-fi movie The Creator, Tong Tong’s engineers say the AI can even experience emotions.

In a video, BIGAI says that Tong Tong “has her own joy, anger and sadness.”