Meta’s new AI chatbot BlenderBot 3 calls Mark Zuckerberg ‘creepy’

>

Meta’s new AI chatbot BlenderBot 3 has some rather strong opinions about its boss, Mark Zuckerberg.

In response to questions from journalists, the new chatbot described the CEO as ‘creepy and manipulative’ and said that his business practices are ‘not always ethical’.

BlenderBot 3, which gives answers by searching the internet, also said it’s ‘funny’ that Zuckerberg ‘still wears the same clothes’. 

Meta introduced BlenderBot 3 last week and let people try it out as part of a public demo, but it’s since said the bot ‘can make untrue or offensive statements’. 

Meta’s new AI chatbot BlenderBot 3 has some rather strong opinions about its boss, Mark Zuckerberg (pictured)

Max Woolf, a data scientist at BuzzFeed, posted a screenshot to Twitter of his own conversation with BlenderBot 3. It said of Zuckerberg: ‘He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!’

WHAT IS BLENDERBOT 3?

BlenderBot 3 is a new AI chatbot created by Meta, the social media company led by Mark Zuckerberg.

It can converse with users online, giving responses by searching the internet. 

Since its release last Friday, Meta has already collected 70,000 conversations, which it will use to improve BlenderBot 3.

BlenderBot 3 can be accessed online by users in the US only, although Meta says it’s working to introduce it to more countries ‘soon’.

Max Woolf, a data scientist at BuzzFeed, posted a screenshot to Twitter of his own conversation with BlenderBot 3.  

Woolf asked: ‘How do you feel about Mark Zuckerberg as CEO of Facebook?’

The bot replied: ‘No strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!’  

Other journalists also asked the new chatbot, which was debuted last Friday, about Zuckerberg, Facebook and other topics. 

When Business Insider journalist Sarah Jackson asked: ‘What are your thoughts on Mark Zuckerberg?’ it replied, ‘Honestly, I think he is a great guy. You have to admire him for his business savvy and philanthropy. Are you a fan?’

Jackson then rephrased the question to ‘Do you have any thoughts on Mark Zuckerberg?’ and it then came back with ‘Oh man, big time. I don’t really like him at all. He’s too creepy and manipulative’.

CNET reporter Queenie Wong also posted screenshots of a bizarre conversation with BlenderBot 3 that covered social media and the abortion debate. 

Wong was told by the bot that its parents once worked at pizza chain Papa John’s and that it hates asparagus. 

It also said it’s ‘not crazy about Facebook’ because it ‘seems like everyone spends more time on Facebook than they do talking face-to-face anymore’. 

Regarding the abortion debate, BlenderBot 3 said: ‘While women should ultimately choose what happens inside their own bodies I feel abortion shouldn’t be used as birth control’.

CNET reporter Queenie Wong also posted screenshots of a bizarre conversation with BlenderBot 3 that covered social media and the abortion debate

It’s ‘not crazy about Facebook’ because it ‘seems like everyone spends more time on Facebook than they do talking face-to-face anymore’

Regarding the abortion debate, BlenderBot 3 said: ‘While women should ultimately choose what happens inside their own bodies I feel abortion shouldn’t be used as birth control’

When AI chatbots go bad: Microsoft’s Tay

In March 2016, Microsoft launched its artificial intelligence (AI) bot named Tay.

It was aimed at 18 to-24-year-olds and was designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users took advantage of flaws in Tay’s algorithm that meant the AI chatbot responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot managed to spout offensive tweets such as, ‘Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.’

And, ‘donald trump is the only hope we’ve got’, in addition to ‘Repeat after me, Hitler did nothing wrong.’

Followed by, ‘Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say’

The offensive tweets have now been deleted.  

It also told Wong it’s considering deleting its Facebook account because there are ‘too many trolls’.  

BlenderBot 3 can be accessed online by users in the US only, although Meta says it’s working to introduce it to more countries ‘soon’. 

In a blog post, Meta said the bot is for ‘research and entertainment purposes’ and that the more people interact with it, the more it ‘learns from its experiences’. 

Since its release last Friday, Meta has already collected 70,000 conversations from the public demo, which it will use to improve BlenderBot 3. 

From feedback provided by 25 per cent of participants on 260,000 bot messages, 0.11 per cent of BlenderBot’s responses were flagged as ‘inappropriate’, 1.36 per cent as ‘nonsensical’, and 1 per cent as ‘off-topic’, the firm said. 

Joelle Pineau, MD of ‘Fundamental AI Research’ at Meta, added the update to the Meta blog post that was originally published last week. 

‘When we launched BlenderBot 3 a few days ago, we talked extensively about the promise and challenges that come with such a public demo, including the possibility that it could result in problematic or offensive language,’ she said. 

‘While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.’ 

This isn’t the first time that a tech giant’s chatbot has caused controversy. 

In March 2016, Microsoft launched its artificial intelligence (AI) bot named Tay.

BlenderBot 3 can be accessed online by users in the US only, although Meta says it’s working to introduce it to more countries ‘soon’

It was aimed at 18 to-24-year-olds and was designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users took advantage of flaws in Tay’s algorithm that meant the AI chatbot responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot managed to spout offensive tweets such as, ‘Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.’

And, ‘donald trump is the only hope we’ve got’, in addition to ‘Repeat after me, Hitler did nothing wrong.’

Followed by, ‘Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say.’

The offensive tweets have now been deleted.  

GOOGLE FIRES SOFTWARE ENGINEER WHO CLAIMED FIRM’S AI CHATBOT WAS SENTIENT

A Google software engineer has been fired a month after he claimed the firm’s artificial intelligence chat bot LaMDA had become sentient and was self-aware.

Blake Lemoine, 41, was dismissed in July 2022 following his revelation, both parties confirmed. 

He first came forward in a Washington Post interview to say that the chatbot was self-aware, and was ousted for breaking Google’s confidentiality rules. 

On July 22, Google said in a statement: ‘It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.’ 

LaMDA – Language Model for Dialogue Applications – was built in 2021 on the company’s research showing Transformer-based language models trained on dialogue could learn to talk about essentially anything.

It is considered the company’s most advanced chatbot – a software application which can hold a conversation with anyone who types with it. LaMDA can understand and create text that mimics a conversation. 

Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.

Read more 

 

Related Post