Snapchat has jumped on the artificial intelligence (AI) bandwagon, rolling out an in-app version of ChatGPT.
Users will be able to ask the chatbot, dubbed ‘My AI’, questions while messaging their friends to aid conversation.
It could help them think of dinner suggestions, send a loved one a personalised poem or come up with a flirty ice breaker.
My AI uses the same technology as OpenAI’s ChatGPT, but has been specially trained so it adheres to the app’s safety guidelines.
Snapchat has also revealed that the chatbot is still ‘prone to hallucination and can be tricked into saying just about anything’.
In AI, hallucinations occur when the technology confidently responds to a question with incorrect information that it appears to have made up.
For example, Google’s rival chatbot Bard got a question wrong in a promotional video, wiping £100 billion off its parent company’s value.
The bot had been asked what to tell a nine-year-old about the James Webb Space Telescope and its discoveries.
In response, Bard confidently announced that Webb was the first telescope to take pictures of a planet outside Earth’s solar system.
However, astronomers were quick to point out that this was actually done in 2004 by the European Southern Observatory’s Very Large Telescope.
ChatGPT has also been found to send users insults and lies, and to question its own abilities.
One social media post showed it calling someone ‘a sociopath, a psychopath, a monster, a demon, a devil’.
While My AI is designed not to perpetuate ‘biased, incorrect, harmful or misleading information’, Snapchat has admitted that ‘mistakes may occur’.
It is currently only being rolled out to Snapchat+ subscribers, who pay £3.99 a month for the latest app features.
A conversation with the AI – complete with Bitmoji – will be pinned to the top of the Chat tab, and can be switched to while mid-conversation with another user.
Snapchat has said that My AI is an experimental feature, and that user feedback will help it to be improved in the future, if and when it is rolled out more widely.
The company added: ‘All conversations with My AI will be stored and may be reviewed to improve the product experience.
‘Please do not share any secrets with My AI and do not rely on it for advice.’
While the disclaimer may seem unnecessary, one woman reportedly divorced her husband based on relationship advice ChatGPT gave her.
My AI is a customised version of ChatGPT which will not give responses that include swearing, sexual references or any other inappropriate content.
This is particularly important, as Snapchat can be downloaded by children as young as 13.
The company hopes the new feature will help ‘foster deeper connections between friends’, but also become something that ‘draws [its] community’ to the app.
Speaking to The Verge, Evan Spiegel, the CEO of Snap, said: ‘The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day.
‘This is something we’re well positioned to do as a messaging service.’
Snapchat is far from the first company to capitalise on the runaway success of ChatGPT, with greetings card retailer Moonpig looking into integrating it into its online shop.
Elon Musk is also rumoured to be working on an ‘anti-woke’ rival to the supposedly biased chatbot.