Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals

Eye tracking and remote controls for VR headsets and AR glasses could soon be a thing of the past. According to Meta CEO Mark Zuckerberg, the company is “close” to selling a device that can be controlled by your brain signals.

Speaking on the Morning Brew Daily podcast (shown below), Zuckerberg was asked to give examples of the most impactful use cases for AI. Always keen to hype Meta’s products – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he started with the Ray-Ban Meta Smart Glasses, discussing how AI and their camera can answer questions about what you’re looking at (although, annoyingly, this feature is still only available to some lucky users in beta form).

He then discussed “one of the wildest things we’re working on”: a neural interface in the form of a wristband. Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the technology matures – unlike the first human subject to have the technology implanted.

Meta’s EMG wristband can read the nervous system signals that your brain sends to your hands and arms. According to Zuckerberg, with this technology you’d essentially just have to think about how you want to move your hand, and that movement would happen virtually, without requiring any major movement in the real world.

Zuckerberg previously showed off Meta’s prototype EMG wristband in a video (shown below) – although not the headset it works with – but what’s interesting about his podcast statement is that he goes on to say he feels Meta is close to having “a product in the market in the coming years” that people can buy and use.

Understandably, that’s a rather vague release window, and unfortunately there’s no mention of how much something like this would cost – although we’re braced for it to cost as much as one of the best smartwatches – but this system could be a big leap forward for privacy, usability and accessibility in Meta’s AR and VR technology.

The next big XR development?

Currently, if you want to interact with the Ray-Ban Meta Smart Glasses via the Look and Ask feature, or reply to a text message you’ve received without picking up your phone, you have to speak out loud. Most of the time that’s fine, but there may be questions you want to ask, or replies you want to send, that you’d rather keep private.

The EMG wristband would let you type out these messages using subtle hand gestures, so you can maintain a higher level of privacy – although, as the podcast hosts note, this also poses problems, not least because schools will have a harder time stopping students from cheating on tests. Gone are the days of sneaking in handwritten notes; now it’s all about sneaking AI into your exam.

Then there are the usability benefits. While these types of wristbands would also be useful in VR, Zuckerberg has mainly talked about using them with AR smart glasses. The big draw, at least for the Ray-Ban Meta Smart Glasses, is that they’re slim and lightweight – at a glance, they’re not noticeably different from a regular pair of Ray-Bans.

Adding cameras, sensors and a chipset for tracking hand gestures could compromise that sleek design – unless you offload some of that functionality and processing power to a separate device, like the wristband.

The displays of the Xreal Air 2 Pro (Image credit: Future)

Some changes would still need to be made to the specs themselves – chiefly, they’d need built-in displays, perhaps like the Xreal Air 2 Pro’s screens – but we’ll have to wait and see what the next Meta smart glasses have in store for us.

Finally, there is accessibility. By their nature, AR and VR are very physical things – you have to physically move your arms, make hand gestures and press buttons – which can make them very inaccessible for people with disabilities that affect mobility and dexterity.

These types of brain signal sensors could begin to address this problem. Instead of having to perform an action physically, someone could simply think about doing it, and the virtual interface would interpret those signals accordingly.

Based on the demos shown so far, there’s still some movement required to use Meta’s neural interface, so it’s far from a perfect solution, but it’s a first step toward making this technology more accessible, and we’re curious to see where it goes.
