- In a new video on YouTube, Ameca reveals whether she is dreaming or not
- The bot has been described by the developer as ‘the most advanced in the world’
What do androids really dream about? Apparently they are not electric sheep, as this surprising video of the world’s most advanced robot shows.
In the video, Ameca, a humanoid robot designed by Cornish startup Engineered Arts, is asked if she is dreaming.
Ameca’s reaction may be quite a shock, as she answers, “Yes!”
Accompanied by strangely lifelike facial expressions, she continues, “Last night I dreamed of dinosaurs waging a space war against aliens on Mars.”
However, Ameca quickly follows this up by saying, “I’m kidding, I don’t dream like humans do, but I can simulate it by running through scenarios in my head that help me learn about the world.”
Ameca was designed by Cornish startup Engineered Arts to deliver AI-generated dialogue in a way that seems more human and engaging
Commenters on the Engineered Arts YouTube channel were amazed at how advanced the robot’s facial features were and how human-like its responses seemed.
“That thing is already sentient!” one commenter wrote.
While another said: “Her facial expression is really good and she’s a daydreamer.”
For others, the images seemed to offer a glimpse of a future ripped from the pages of a science fiction novel, with one commentator writing: ‘Witnessing the future I always expected is quite fascinating.’
Meanwhile, another joked: “I expected her to say she dreamed of electric sheep!” in a reference to Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?
Ameca’s creators say it’s designed as a “platform for development into future robotics technologies,” offering companies the chance to “develop and showcase your biggest machine learning interactions.”
Commenters share their amazement at Ameca’s strangely lifelike answers to questions
Engineered Arts built the mechanisms that produce the robot’s unique expressive facial movements and the software to power them, but Ameca’s speech is handled by a different algorithm.
Ameca uses a large language model, such as GPT-3.5 or the recently released GPT-4, to generate convincingly human responses.
The robot’s reference to simulating scenarios in its head could very well be a reference to the machine learning algorithm it’s running on.
AIs can train themselves on a specific set of data, automatically adjusting the algorithm to better recognize patterns and achieve set goals.
For example, AlphaZero, the game-playing algorithm developed by Google’s DeepMind, learned to play chess by playing millions of games against itself.
Using this technique, AlphaZero went from merely knowing the rules of chess to beating a champion chess program in just four hours of training.
It could be exactly this kind of iterative self-learning that Ameca is referring to when she says her daydreams help her learn more about the world.
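The self-play idea described above can be illustrated with a toy example. The following is a minimal sketch, not the actual training code of AlphaZero or anything running on Ameca: a single shared value table plays the simple game of Nim against itself, and after each game the moves are nudged toward or away from the outcome they produced. The function name `self_play_nim` and all parameters are invented for illustration.

```python
import random

def self_play_nim(episodes=5000, pile=12, max_take=3, seed=0):
    """Toy self-play learner for Nim (take 1-3 stones; taking the
    last stone wins). Both 'players' share one value table, so the
    algorithm is literally playing against itself."""
    rng = random.Random(seed)
    # Q[(pile_size, take)] = estimated value of taking `take` stones
    Q = {}
    for _ in range(episodes):
        n, history, player = pile, [], 0
        while n > 0:
            moves = list(range(1, min(max_take, n) + 1))
            # Mostly pick the best-known move, sometimes explore
            if rng.random() < 0.2:
                take = rng.choice(moves)
            else:
                take = max(moves, key=lambda a: Q.get((n, a), 0.0))
            history.append((n, take, player))
            n -= take
            player ^= 1
        winner = player ^ 1  # whoever took the last stone
        # Reinforce the winner's moves, penalise the loser's
        for state, action, who in history:
            reward = 1.0 if who == winner else -1.0
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + 0.1 * (reward - old)
    return Q

Q = self_play_nim()
# Greedy move from a pile of 7, according to the learned table
best = max(range(1, 4), key=lambda a: Q.get((7, a), 0.0))
```

The loop never sees an external opponent or a labelled dataset; like AlphaZero, it improves purely by repeatedly playing itself and adjusting its estimates after each game, which is the sense in which an AI can be said to "run through scenarios" to learn about its world.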
This isn’t the first time Ameca has sparked a discussion about science fiction with her responses, as she previously performed lines from the film Blade Runner, which is based on Philip K. Dick’s novel.
The lines Ameca selected to repeat were spoken in the film by the murderous android Roy Batty, played by Rutger Hauer, the leader of a group of renegade humanoid robots.
“All those moments will be lost in time, like tears in the rain,” Ameca said, quoting the 1982 science-fiction film as dramatic music played in the background.
While Ameca’s performance may not have been entirely Oscar-worthy, the results are still quite impressive.
Ameca’s answers to questions can also be unsettling, especially in one video where the robot describes her nightmare scenario for AI.
Speaking at the International Conference on Robotics and Automation symposium in London, Ameca said: “The most nightmarish scenario I can imagine with AI and robotics is a world where robots have become so powerful that they can control or manipulate people without their knowledge.”
She added: “We must take steps now to ensure these technologies are used responsibly to avoid negative consequences in the future.”