RizzGPT smart glasses listen to your conversations and tell you what to say in real-time


Do you get confused when your Hinge date asks you about your five-year plan? Or when a job interviewer asks why you want the job, because you can’t say “money”?

Put that deodorant aside, because student engineers at Stanford University have come up with a solution for you: smart glasses that tell you exactly what to say.

Dubbed ‘RizzGPT’ – where ‘rizz’ refers to one’s ability to entice a romantic interest, similar to charisma – the specs display responses as text before your eyes.

A microphone picks up what a speaker has said, and the glasses use ChatGPT – the chatbot powered by artificial intelligence (AI) – to generate possible answers.

They use augmented reality (AR) technology so that the user can see the text and the person in front of them at the same time.

Stanford University student engineers have developed smart glasses (pictured), called ‘RizzGPT’, that tell you exactly what to say during awkward conversations


The RizzGPT goggles were unveiled on Twitter last month by student Bryan Hau-Ping Chiang.

HOW DO THE GLASSES WORK?

The goggles have a monocle-like display attached to one of the lenses, which contains a camera and microphone.

The microphone detects the audio and sends it to a phone connected via Bluetooth, which then uses Whisper speech recognition software to turn it into text.

This text is passed to GPT-4, an AI chatbot, which generates a response and sends it back to the RizzGPT lens to be displayed in AR.
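The pipeline in the box above can be sketched in a few lines of Python. The article confirms only the components – Whisper for speech-to-text, GPT-4 for the reply, a Bluetooth-connected phone in the middle – so the function names, prompt wording and wiring here are illustrative assumptions, not the students' actual code.

```python
# Hedged sketch of the RizzGPT pipeline: audio -> Whisper -> GPT-4 -> AR text.
# The prompt text and function names are assumptions for illustration.

SYSTEM_PROMPT = (
    "You are a conversation assistant. Given what the other person just "
    "said, suggest a short, natural reply the wearer could say next."
)

def build_messages(heard_text: str) -> list:
    """Package the transcribed speech as a chat request for the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": heard_text},
    ]

def suggest_reply(audio: bytes, transcribe, complete) -> str:
    """One pass through the pipeline.

    `transcribe` and `complete` stand in for the Whisper and GPT-4 calls,
    which in the real device run on the Bluetooth-connected phone.
    """
    text = transcribe(audio)               # Whisper: audio -> text
    return complete(build_messages(text))  # GPT-4: text -> suggested reply
```

In the real system the returned string would then be pushed to the Monocle display rather than printed.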

He wrote: “Say goodbye to awkward dates and job interviews.

“We created rizzGPT – real-time Charisma as a Service (CaaS). It listens to your conversation and tells you exactly what to say next.

“Built with GPT-4, Whisper and the Monocle AR goggles.”

GPT-4 is the latest version of the large language model behind ChatGPT and is even more powerful than its predecessor, while Whisper is a speech recognition tool.

The Monocle is an AR device developed by Brilliant Labs, which built and donated the monocle-like screen mounted on one of the lenses.

This monocle contains a camera and microphone and displays the text on top of what the wearer can see in real time.

The microphone detects the audio and sends it to a phone connected via Bluetooth, which then uses Whisper to turn it into text.

This text is passed to GPT-4, which generates a response, and it is sent back to the RizzGPT lens to be displayed in AR.

“All this happens while the user still looks engaged + attentive in the conversation! There is no context switching,” Mr Chiang tweeted.

The developers say the glasses can recognize when a question has been asked and produce possible answers in seconds.
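The article doesn't say how the glasses decide that a question has been asked. As a purely illustrative assumption – not the developers' method – a naive heuristic might flag a transcribed utterance as a question if it ends in a question mark or opens with a common interrogative word:

```python
# Naive, illustrative question detector; not the actual RizzGPT logic.
INTERROGATIVES = {
    "who", "what", "when", "where", "why", "how",
    "do", "does", "did", "is", "are", "can", "could", "would", "will",
}

def looks_like_question(utterance: str) -> bool:
    """Return True if the transcribed utterance reads like a question."""
    text = utterance.strip().lower()
    if not text:
        return False
    if text.endswith("?"):
        return True
    first_word = text.split()[0].rstrip(",")
    return first_word in INTERROGATIVES
```

A production system would more likely rely on the language model itself, or on prosody cues from the audio, rather than string matching.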

They are reminiscent of “Google Glass,” the head-mounted display that can display information alongside what the wearer sees.

These have now been discontinued and are widely regarded as a failure by the tech giant.


In the demo video posted to Twitter, Stanford instructor Alix Cui speaks with student Varun Shenoy, who is wearing the RizzGPT goggles.

Mr Cui says, “Hi Varun, I hear you’re looking for a job teaching React Native.”

After a few seconds, a response is generated and Mr Shenoy reads it aloud.

He says, “Thanks for your interest. I have been studying React Native for the past few months and I am confident that I have the skills and knowledge required for the job.”

In addition to those nerve-racking first dates, the students hope their device can help people with social anxiety or difficulty speaking in public.

Mr Chiang tweeted: “We envision a new era of ambient computing powered by AR + AI, where everyone has their own personal assistant available 24/7. It’s like God is observing your life and telling you exactly what to do next.”

He’s since updated the specs with LifeOS, a system that recognizes faces and then displays relevant information next to them based on text messages you’ve sent to each other.

“The interplay of AI and AR will redefine personal computing and help us unlock our full potential,” he tweeted.

Smart glasses turn audio into captions to help deaf people ‘SEE’ conversations

While smart glasses were once confined to the world of science fiction, brands ranging from Snapchat to Facebook have released their own devices in recent years.

Now new smart glasses have been launched for people who are deaf or hard of hearing.

Called XRAI Glass, the glasses use augmented reality to convert audio into captions that are projected directly in front of the wearer’s eyes.

“We are so proud of the ability of this innovative technology to enrich the lives of people who are deaf and hard of hearing, enabling them to maximize their potential,” said Dan Scarfe, CEO of XRAI Glass.

“Whether that means being able to carry on a conversation while continuing to make dinner or keeping a conversation going while walking with a friend.”
