Mind-reading AI turns your thoughts into pictures with 80% accuracy

Artificial intelligence can already create images from text prompts, but now scientists have revealed a gallery of images the technology produced by reading brain activity instead.

The new AI-powered algorithm reconstructed about 1,000 images, including a teddy bear and an airplane, from brain scans with 80 percent accuracy.

Osaka University researchers used the popular Stable Diffusion model, similar to OpenAI’s DALL-E 2, which can create images from text input.

The team showed the participants individual sets of images and collected fMRI (functional magnetic resonance imaging) scans, which the AI then decoded.

Scientists fed the AI the brain activity of four study participants, and the software then reconstructed what it saw in the scans. The top row shows the original images shown to participants and the bottom row shows the AI-generated images

“We show that our method can reconstruct high-resolution images with high semantic fidelity from human brain activity,” the team shared in the study, posted on the preprint server bioRxiv.

“Unlike previous image reconstruction studies, our method does not require training or fine-tuning of complex deep-learning models.”

The algorithm pulls information from parts of the brain involved in image perception, such as the occipital and temporal lobes, according to Yu Takagi, who led the study.

The team used fMRI because it picks up on changes in blood flow in active brain regions, Science.org reports.

fMRI can detect oxygen molecules, so the scanners can see where in the brain our neurons (the brain’s nerve cells) are working hardest, and taking in the most oxygen, while we’re having thoughts or emotions.
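As a rough illustration of what that raw signal looks like in practice, here is a minimal Python sketch of loading an fMRI volume and pulling out one voxel’s activity over time. The file name and voxel coordinates are placeholders, and this is not the study’s actual pipeline.

```python
# Illustrative sketch: reading voxel activity from a 4-D fMRI scan.
# "scan.nii.gz" and the voxel coordinates are placeholders, not study data.
import nibabel as nib
import numpy as np

img = nib.load("scan.nii.gz")      # 4-D NIfTI volume: x, y, z, time
data = img.get_fdata()             # voxel intensities as a float array

voxel_ts = data[30, 20, 25, :]     # time series for a single voxel
print("mean BOLD signal:", np.mean(voxel_ts))
```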

A total of four participants took part in the study, each viewing a set of 10,000 images.

The AI starts generating each image as noise, similar to television static, which it then gradually replaces with the distinctive features it detects in the brain activity, by referencing the images it was trained on and finding a match.
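The broad idea can be sketched in a few lines of Python. The `predict_noise` function below is a dummy stand-in for the trained denoising network (a U-Net in Stable Diffusion); a real model would also be conditioned on the decoded brain signals. This is a toy illustration, not the researchers’ code.

```python
# Toy sketch of diffusion-style generation: start from pure noise
# (like TV static) and repeatedly subtract an estimate of the noise.
import numpy as np

def predict_noise(image, step):
    """Dummy stand-in for the trained noise-prediction network."""
    return image * 0.1  # a real model predicts noise from image + conditioning

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))   # pure noise to start

for step in reversed(range(50)):        # iteratively denoise
    image = image - predict_noise(image, step)

# After enough steps, a trained model leaves a coherent image that
# matches the conditioning signal (text, or decoded brain activity).
```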

“We demonstrate that our simple framework can reconstruct high-resolution (512 x 512) images from brain activity with high semantic fidelity,” the study said.

“We quantitatively interpret each component of an LDM [a latent diffusion model] from a neuroscientific perspective by mapping specific components to distinct brain regions.

“We present an objective interpretation of how the text-to-image conversion process implemented by an LDM incorporates the semantic information expressed by the conditional text while preserving the appearance of the original image.”
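In practice, the paper’s approach boils down to simple linear mappings from fMRI voxel patterns into Stable Diffusion’s internal spaces. The sketch below illustrates that idea with ridge regression on synthetic data; the shapes, names, and data are assumptions for illustration, not the authors’ code.

```python
# Sketch of the decoding idea: a linear model maps fMRI voxel patterns
# to image latents that a diffusion model can then render. Toy sizes and
# random data are used here purely for illustration.
import numpy as np
from sklearn.linear_model import Ridge

n_trials, n_voxels, latent_dim = 200, 300, 256  # toy sizes (SD's latent is 4x64x64)
rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_voxels))    # fMRI response per viewed image
Z = rng.standard_normal((n_trials, latent_dim))  # matching image latents

decoder = Ridge(alpha=1.0)
decoder.fit(X, Z)                  # learn a linear map: voxels -> latent

z_hat = decoder.predict(X[:1])     # decode a latent from a new scan
# z_hat would then be handed to the diffusion model to render the picture.
```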

Participants were shown an image and the AI collected their brain activity, which it then decoded and reconstructed


Another ‘mind-reading’ machine is able to decode brain activity when a person silently tries to spell words phonetically to make complete sentences

Combining artificial intelligence with brain scanners has become a major task for the scientific community, which believes the pairing holds new keys to unlocking our inner worlds.

In a November study, scientists used the technologies to analyze the brain waves of nonverbal, paralyzed patients and translate them into sentences on a computer screen in real time.

The “mind-reading” machine can decode brain activity as a person silently tries to spell words phonetically to make complete sentences.

Researchers at the University of California said their neuroprosthetic speech device has the potential to restore communication to people who can’t speak or type because of paralysis.

In tests, the device decoded the volunteers’ brain activity as they silently attempted to pronounce each letter phonetically, producing sentences from a vocabulary of 1,152 words at a rate of 29.4 characters per minute with an average character error rate of 6.13 percent.
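For context, a character error rate like that 6.13 percent figure is typically computed as the edit distance between the decoded sentence and the intended one, divided by the intended sentence’s length. A small self-contained Python example (not the researchers’ code):

```python
# Character error rate (CER): Levenshtein edit distance between the
# decoded and intended text, divided by the intended text's length.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

reference = "hello world"
decoded = "helo world"
print(f"CER: {edit_distance(decoded, reference) / len(reference):.2%}")  # ~9.09%
```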

In further experiments, the authors found that the approach generalized to large vocabularies of more than 9,000 words, with an average error rate of 8.23 percent.