AI robot Ameca’s brutal response to 60 Minutes reporter Tom Steinfort

The most advanced robot ever built clumsily dodged a question from a TV reporter asking if she thought he was handsome.

Tom Steinfort interviewed ‘Ameca’ – the very latest artificial intelligence robot – for 60 Minutes on Sunday.

Steinfort traveled to the sleepy town of Falmouth, in south west England, to chat with the lifelike machine.

He complimented Ameca on her extraordinarily realistic facial expressions, her motorized limbs, the microphones that let her hear, and the binocular eye cameras that let her see.

“Your eyes look very real, your facial mannerisms, it’s all very real. Do you think I’m pretty?” he asked.

Ameca replied, “It’s not for me to judge your looks, but I think you have a great personality and that’s always important.”

60 Minutes reporter Tom Steinfort (right) traveled to Cornwall in south west England to interview ‘Ameca’ – the world’s most advanced artificial intelligence robot (left)

Ameca “thinks” for itself using generative AI technology built on very large language models, which answer questions by first converting words into mathematical form.
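As a rough illustration only – not a description of Ameca’s actual software – the sketch below shows the first step that explanation refers to: words being turned into numbers a language model can then calculate with. The vocabulary, vector size and values here are invented for the example.

```python
# A toy illustration, NOT Ameca's real software: how a language model's
# first step turns words into numbers ("tokens" and "embedding" vectors).
import random

# Hypothetical mini-vocabulary: each word is assigned an integer id.
vocab = {"do": 0, "you": 1, "think": 2, "i'm": 3, "pretty": 4, "?": 5}

def tokenize(text):
    """Split the sentence and map each word to its integer id."""
    return [vocab[word] for word in text.lower().split()]

# Each id is looked up in a table of small vectors. Real models learn
# these numbers during training; here they are random placeholders.
random.seed(0)
embedding_table = [[round(random.uniform(-1, 1), 2) for _ in range(4)]
                   for _ in vocab]

question = "do you think i'm pretty ?"
ids = tokenize(question)
vectors = [embedding_table[i] for i in ids]

print(ids)         # the sentence as integer tokens, e.g. [0, 1, 2, 3, 4, 5]
print(vectors[0])  # the four numbers standing in for the word "do"
```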

When Steinfort introduced himself as a reporter from Australia, the robot replied sarcastically, “That explains things.”

Later in the interview, Steinfort asked the supermachine if she wanted to “fall in love” one day.

“What kind of strange and wonderful question is that? Well, I’m a robot, but what I feel when I talk to people is something special and unique,” Ameca replied.

“Perhaps it can be called love in its own way.”

Ameca ‘thinks’ for itself using generative AI technology built on very large language models, which answer questions by converting words into mathematical form

Ameca told Steinfort he has a “great personality” after the reporter asked if she thought he was handsome (pictured: Steinfort as seen through Ameca’s eyes)

The curious conversation with Ameca highlighted the exciting yet frightening possibilities of AI technology.

Dr. Catriona Wallace has been studying AI for the past two decades and is working to ensure that AI technology advances in a safe manner.

Dr. Wallace, who leads the Responsible Metaverse Alliance, believes society is at a turning point, saying it won’t be long before AI is “at the heart of everything we do.”

She added that tech giants are driving AI’s rapid expansion but are focused on profit rather than the ethics behind the new technology.

“There are no rules, no laws, no regulations governing AI, it’s the wild west,” said Dr Wallace.

“Who leads it? The tech giants. And have the tech giants demonstrated so far that they are ethically driven and purposeful in their mission? No, they haven’t. The tech giants go for profit.”

Dr. Wallace said that while she struggles to come up with a job that AI won’t replace, the technology will also create new jobs and careers.

“We predict that at least 80 million people will lose their jobs over the next two years, but that as many as 92 million new jobs may be created,” said Dr Wallace.

“I think it will be 50 per cent fantastic benefits and 50 per cent dangerous and dark instances.”

Dr. Catriona Wallace (pictured) of the Responsible Metaverse Alliance said AI will soon be at the “heart” of everything we do, but believes it will bring both dangers and benefits to society

Australian professor Michael Osborne said he is concerned about how the technology will affect the future.

Professor Osborne is leading research into the dangers of AI at the University of Oxford and argues that we need to be very careful about how we pursue AI.

“We have to be very careful that what we tell the AI we want is actually what we want,” Osborne said.

“What AI does is relentlessly pursue the goals we give it and if those goals are somewhat inconsistent with our own, we could have some really problematic consequences.”

Mr Osborne claimed that some applications of AI could potentially be ‘too harmful to continue’ and also fears that the technology – if deployed as a weapon – could destroy democracy or even world peace.

“AI could be used to deliver propaganda bots that could produce tailored misinformation designed to target particularly small sub-sectors of the electorate,” Mr Osborne said.

“AI could be used to monitor a population, to read through everything they write and feed back messages that support the regime in a particularly targeted way.”

Australian professor Michael Osborne (pictured) is also concerned about AI technology, claiming it could have a negative impact on democracy and even world peace

Mr Osborne added that AI could even be used to destabilize the balance between great powers.

“One scenario that really worries me is if AI could be used to power underwater drones that could monitor the oceans to locate nuclear submarines,” Osborne said.

“Today it is difficult for any power to launch an effective first strike because they cannot be sure that they will knock out all of the enemy’s nuclear capabilities.

“But if AI destabilizes that balance, we could see a breach in the bargain between the great powers and that could lead to some pretty worrying risks.”
