Could AI someday read minds? Japanese breakthrough sparks debate

Tokyo, Japan – Yu Takagi couldn’t believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what the subject was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went to the bathroom and looked at myself in the mirror and saw my face and thought, ‘Okay, that’s normal. Maybe I’m not going crazy’.”

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyze the brain scans of subjects who were shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not having been shown the images beforehand or trained in any way to produce the results.

“We really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not represent mind reading at this point — the AI can only produce images that a person has viewed.

“This isn’t mind reading,” Takagi said. “Unfortunately, there are many misunderstandings about our research.”

“We cannot decipher imaginings or dreams; we think this is too optimistic. But there is of course potential in the future.”

But the development has nonetheless raised concerns about how such technology might be used in the future.

Despite his excitement, Takagi himself acknowledges that such fears are not unfounded, given the possibility of abuse by people with malicious intent or without consent.

“For us, privacy issues are the most important. If a government or institution can read people’s minds, it’s a very sensitive issue,” Takagi said. “There needs to be high-level discussions to make sure this can’t happen.”

Yu Takagi and his colleague developed a method to use AI to analyze and visually display brain activity [Yu Takagi]

Takagi and Nishimoto’s research caused quite a stir in the tech community, which has been electrified by breakneck advances in AI, including the release of ChatGPT, which produces human-like speech in response to a user’s prompts.

Their paper describing the findings ranks in the top 1 percent for engagement out of more than 23 million research outputs tracked to date, according to Altmetric, a data company.

The study has also been accepted at the Computer Vision and Pattern Recognition (CVPR) Conference, scheduled for June 2023, a common route for legitimizing major breakthroughs in neuroscience.

Still, Takagi and Nishimoto are careful not to get carried away by their findings.

Takagi claims there are two primary bottlenecks to true mind reading: brain scan technology and AI itself.

Despite advances in neural interfaces — including brain-computer interfaces that use electroencephalography (EEG), which detects brain waves via electrodes attached to a subject’s head, and fMRI, which measures brain activity by detecting changes in blood flow — scientists believe we may be decades away from being able to accurately and reliably decode imagined visual experiences.

Yu Takagi and his colleague used an MRI to scan subjects’ brains for their experiment [Yu Takagi]

In Takagi and Nishimoto’s study, subjects had to sit in an fMRI scanner for up to 40 hours, which was both costly and time-consuming.

In a 2021 paper, researchers at the Korea Advanced Institute of Science and Technology noted that conventional neural interfaces “chronically lack recording stability” due to the soft and complex nature of neural tissue, which reacts in unusual ways when brought into contact with synthetic interfaces.

Further, the researchers wrote: “Current recording techniques generally rely on electrical pathways to transfer the signal, which is sensitive to environmental electrical noise. Because the electrical noise significantly distorts the sensitivity, it is not yet easy to achieve fine signals from the target area with high sensitivity.”

Current AI limitations are a second bottleneck, though Takagi acknowledges that these capabilities are increasing by the day.

“I’m optimistic for AI, but I’m not optimistic for brain technology,” Takagi said. “I think this is the consensus among neuroscientists.”

Takagi and Nishimoto’s framework could be used with brain-scanning equipment other than MRI, such as EEG or hyper-invasive technologies such as the brain-computer implants being developed by Elon Musk’s Neuralink.

Still, Takagi believes there is currently little practical application for his AI experiments.

To begin with, the method is not yet transferable to new subjects. Because the shape of the brain varies from person to person, you cannot apply a model made for one person directly to another.

But Takagi sees a future where it could be used for clinical, communicative or even entertainment purposes.

“It’s hard to predict what a successful clinical application might be at this stage, as it’s still very exploratory research,” Ricardo Silva, a professor of computational neuroscience at University College London and a research fellow at the Alan Turing Institute, told Al Jazeera.

“This may prove to be an additional avenue to develop a marker for Alzheimer’s detection and progression evaluation by assessing ways to spot persistent abnormalities in visual navigation task images reconstructed from a patient’s brain activity.”

Some scientists believe that in the future, AI could be used to detect diseases such as Alzheimer’s [Yu Takagi]

Silva shares Takagi’s concerns about the ethics of technology that could one day be used for true mind reading.

“The most pressing issue is to what extent the data collector should be compelled to disclose in full detail the use of the collected data,” he said.

“It’s one thing to sign up as a way to take a snapshot of your younger self for, perhaps, future clinical use… It’s quite another to have it used for secondary tasks like marketing, or worse, for use in lawsuits against one’s self-interest.”

Still, Takagi and his partner have no intention of slowing down their investigation. They are already planning version two of their project, which will focus on improving the technology and applying it to other modalities.

“We are now developing a much better [image] reconstruction technique,” Takagi said. “And it’s happening at a very fast pace.”