Watch a film through the eyes of a MOUSE: Scientists use AI to reconstruct what it sees from its brain signals

Have you ever had a hard time describing something you saw on TV last night to your boyfriend?

Soon you may be able to project your mental images onto the big screen, as scientists have done with mice.

A team from École Polytechnique Fédérale de Lausanne (EPFL) developed an artificial intelligence (AI) tool that can interpret the brain signals of rodents.

The algorithm, called CEBRA, is trained to assign neural activity to specific frames in videos so it can then predict and reconstruct what a mouse is looking at.

The news comes shortly after researchers at the University of Texas at Austin used AI to turn people’s thoughts into text in real time.

A team from École Polytechnique Fédérale de Lausanne (EPFL) developed an artificial intelligence (AI) tool that can interpret the brain signals of rodents. The original movie is displayed at the top, while the decoded movie is displayed at the bottom

The algorithm, called CEBRA, is trained to assign neural activity to specific frames in videos so it can then predict and reconstruct what a mouse is looking at


Dr. Mackenzie Mathis, the study’s lead researcher, told MailOnline: ‘In the future, as CEBRA is not limited to vision, we believe it will be a powerful tool for brain-machine interfaces.

WHAT IS CEBRA?

CEBRA is a machine learning algorithm – a computer program that can improve its performance on a task by learning from data.

It was trained on movies watched by mice alongside their real-time brain activity.

CEBRA used this data to learn which brain signals belong to which frames.

It could then be given new brain activity it had not encountered before and, from that, predict what the mouse was looking at at the time.

The researchers were able to convert this information into a CEBRA-generated movie that could be compared to the original.
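For readers curious how that matching step works in principle, here is a minimal, hypothetical sketch in Python. It is not the authors' code: the real CEBRA first learns a compact 'embedding' of the brain activity using a contrastive-learning approach, whereas this toy version simply simulates some activity for each frame and uses an off-the-shelf nearest-neighbour classifier to guess the frame from unseen activity. All numbers are illustrative assumptions.

```python
# Toy sketch of the 'match brain signals to frames, then predict frames' idea.
# The simulated data and all numbers are illustrative assumptions, not study data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

n_frames = 900    # frames in a short clip (roughly a 30-second movie)
n_neurons = 100   # hypothetical number of recorded neurons

# Pretend each frame evokes its own characteristic pattern of activity.
frame_templates = rng.normal(size=(n_frames, n_neurons))

def record_viewing(noise=0.3):
    """Simulate one viewing of the clip: noisy activity, one row per frame."""
    return frame_templates + noise * rng.normal(size=frame_templates.shape)

# First viewing: activity paired with the frame that was on screen at the time.
train_activity = record_viewing()
frame_labels = np.arange(n_frames)

# Stand-in for CEBRA's decoder: learn which activity pattern goes with which frame.
decoder = KNeighborsClassifier(n_neighbors=1)
decoder.fit(train_activity, frame_labels)

# A new viewing the decoder has never seen: guess which frame the mouse saw.
test_activity = record_viewing()
predicted_frames = decoder.predict(test_activity)

print(f"Frames guessed correctly: {(predicted_frames == frame_labels).mean():.1%}")
```

In the real study the decoding is done from a learned low-dimensional embedding of the neural data rather than from raw activity, but the train-on-one-viewing, test-on-another logic is the same.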

‘For example, it could be used to control computer cursors in patients who are unable to move, or it could be used to provide visual sensations to the visually impaired if paired with real-time stimulation of the brain.

‘Of course I can’t fully predict this and it’s years away, but these are areas where I’m excited to see people using CEBRA.’

For the study, published today in Nature, the researchers trained CEBRA using movies watched by mice and their real-time brain activity.

Some of the activity was measured directly with electrode probes inserted into the visual cortex region of the brain.

The rest was collected using optical probes on genetically engineered mice whose neurons turn green when activated.

Using this data, CEBRA learned which brain signals are associated with which frames of a specific movie.

It was then given new brain activity it had not encountered before, from a mouse watching a slightly different version of the movie clip.

From that it could predict in real time which frame the mouse had been looking at, and the researchers made their own movie from this data.
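The 'decoded movie' itself can be thought of as nothing more exotic than picking, for every moment in time, the frame of the original film that the decoder believes was on screen. Below is a hypothetical sketch of that reassembly step; the clip array and the predictions are placeholders, not data from the study.

```python
# Hypothetical sketch: turning predicted frame numbers back into a watchable clip.
# `original_clip` stands in for the film shown to the mouse; in practice it would
# be loaded from the video file rather than filled with blank frames.
import numpy as np

n_frames, height, width = 900, 120, 160                        # assumed clip dimensions
original_clip = np.zeros((n_frames, height, width), np.uint8)   # placeholder pixels

rng = np.random.default_rng(0)
jitter = rng.integers(-2, 3, n_frames)                          # pretend small decoding errors
predicted_frames = np.clip(np.arange(n_frames) + jitter, 0, n_frames - 1)

# The decoded movie: at each timestep, show the frame the decoder picked.
decoded_clip = original_clip[predicted_frames]

matches = (predicted_frames == np.arange(n_frames)).mean()
print(f"Timesteps where the decoded frame is exactly right: {matches:.0%}")
```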

Dr Mathis told MailOnline: ‘We don’t predict every pixel, but rather the frame.

‘Chance level would be 1/900, so we think an accuracy of more than 95 percent is pretty exciting. But this pixel-wise decoding is something we’ll do next.’
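To put that figure in context, here is the arithmetic behind the quote: with 900 frames to choose from, random guessing would land on the right frame only about 0.1 percent of the time.

```python
# The arithmetic behind the quote: chance level for picking 1 frame out of 900.
n_frames = 900
chance = 1 / n_frames          # ~0.0011, i.e. about 0.11 percent
reported = 0.95                # the accuracy figure quoted in the article

print(f"Chance level: {chance:.2%}")
print(f"Reported accuracy: {reported:.0%} (about {reported / chance:.0f} times better than guessing)")
```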

In a sample video, the mouse can be seen watching a 1960s black and white film of a man running towards a car and opening the trunk.

A separate screen shows what CEBRA thinks the mouse is looking at, which is an almost identical video, albeit a bit more glitchy.


In a sample video (top), the mouse can be seen watching a 1960s black-and-white film of a man running toward a car and opening the trunk. A separate screen (bottom) shows what CEBRA thinks the mouse is looking at, which is an almost identical video, albeit a bit more glitchy

The algorithm can do this with data from just one percent of the neurons in a mouse’s visual cortex, a region that contains around 0.5 million neurons in total.

‘We wanted to show how little data we could use, both in terms of film clips and neural data,’ Dr Mathis told MailOnline.

‘This makes it much more realistic for clinical applications in the future.

‘Notably, the algorithm can run in real time, so it takes less than a second for the model to predict the entire video clip.’

The researchers say that CEBRA is not limited to just interpreting visual information from brain data.

It can also be used to predict arm movements in primates and to determine where a rat is in its pen while it roams freely.

Dr Mathis said: ‘[CEBRA] could also give us insight into how the brain processes information and could provide a platform for discovering new principles in neuroscience by combining data across animals, and even species.


The researchers say that CEBRA is not limited to just interpreting visual information from brain data. It can also be used to predict arm movements in primates and to determine where a rat is in its pen as it roams freely

‘The potential clinical applications are exciting.’

A similar technology was unveiled last month by a team at Osaka University working on data from the human brain.

Their AI-powered algorithm reconstructed about 1,000 images, including a teddy bear and an airplane, from brain scans with 80 percent accuracy.

It used the popular Stable Diffusion model, similar to OpenAI’s DALL-E 2, which can generate images from text input.

The researchers showed the participants individual sets of images and collected fMRI (functional magnetic resonance imaging) scans, which the AI then decoded.

Similarly, scientists at the University of Texas at Austin this week unveiled a technology that converts a person’s brain activity into text.

Three study participants listened to stories while lying in an MRI machine as an AI-powered ‘decoder’ analyzed their brain activity.

They were then asked to read another story or make up their own, after which the decoder could convert the MRI data into text in real time.

The breakthrough raises concerns about ‘mental privacy’, as it could be a first step towards eavesdropping on other people’s thoughts.

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognize patterns in information – including speech, text data or visual images – and have been the basis for many advances in AI in recent years.

Conventional AI uses input to “teach” an algorithm about a given subject by feeding it massive amounts of information.
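As a concrete, everyday example of that ‘feed it data and it learns the pattern’ idea, the short Python sketch below trains a small neural network on scikit-learn’s built-in images of handwritten digits. It has nothing to do with the brain-decoding studies above; it simply illustrates the kind of pattern recognition described in this box.

```python
# Minimal example of the idea in this box: a small artificial neural network
# "taught" to recognise patterns (handwritten digits) by being fed labelled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 pixel images of the digits 0-9, with their labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# One hidden layer of 64 artificial neurons; the network adjusts its connection
# weights as it is fed the training images together with their correct labels.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"Accuracy on unseen digits: {net.score(X_test, y_test):.1%}")
```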


AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognize patterns in information – including speech, text data or visual images

Practical applications include Google’s language translation services, Facebook’s facial recognition software, and Snapchat’s image-changing live filters.

The process of entering this data can be extremely time-consuming and is limited to one type of knowledge.

A new breed of ANNs, called generative adversarial networks (GANs), pits two AI bots against each other, allowing them to learn from each other.

This approach is designed to accelerate the learning process and refine the output of AI systems.
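To make the idea of two competing models concrete, here is a heavily simplified, self-contained toy example in Python. A one-line ‘generator’ tries to produce numbers that look like they come from the real data (a bell curve centred on 3), while a ‘discriminator’ learns to tell real numbers from generated ones; each improves by playing against the other. Every detail here (the one-dimensional data, the tiny linear models, the learning rates) is an illustrative assumption rather than code from any of the studies mentioned.

```python
# Toy adversarial training loop: a generator and a discriminator improve by competing.
# Real data comes from a Gaussian centred at 3; the generator learns to mimic it.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: turns random noise z into a fake sample, fake = a*z + b
a, b = 1.0, 0.0
# Discriminator: estimates the probability a sample is real, D(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(3.0, 1.0, batch)        # samples the generator should imitate
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator update: adjust a, b so the discriminator calls the fakes "real".
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean((1 - d_fake) * w * z)
    grad_b = np.mean((1 - d_fake) * w)
    a += lr * grad_a
    b += lr * grad_b

generated = a * rng.normal(size=1000) + b
print(f"Real data mean: 3.0, generator now produces mean {generated.mean():.2f}")
```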