The images in the bottom row were recreated from a brain scan of a person looking at those in the top row
Yu Takagi and Shinji Nishimoto/Osaka University, Japan
A tweak to a popular text-to-image artificial intelligence allows it to turn brain signals directly into images. The system requires extensive training using large and expensive imaging equipment, however, so everyday mind reading remains a long way off.
Many research groups have previously generated images from brain signals using high-powered AI models that required fine-tuning millions to billions of parameters.
Now, Shinji Nishimoto and Yu Takagi at Osaka University in Japan have developed a simpler approach using Stable Diffusion, a text-to-image generator released by Stability AI in August 2022. Their new method involves training only thousands of parameters, rather than millions.
When used normally, Stable Diffusion turns a text prompt into an image by starting with random visual noise and tweaking it to produce images that resemble those in its training data with similar text captions.
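For a sense of what that looks like in practice, here is a minimal text-to-image call using the open-source diffusers library. This is a generic sketch of ordinary Stable Diffusion usage; the checkpoint name, prompt and settings are illustrative, not those used in the study.

```python
# Minimal text-to-image sketch with the diffusers library.
# Checkpoint and settings are illustrative, not the study's configuration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Denoising starts from random noise and is steered toward images whose
# training captions resemble the prompt.
image = pipe("a snowy mountain landscape", num_inference_steps=50).images[0]
image.save("landscape.png")
```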
Nishimoto and Takagi built two add-on models to make the AI work with brain signals. The pair used data from four people who had participated in a previous study that used functional magnetic resonance imaging (fMRI) to scan their brains while they viewed 10,000 different pictures of scenery, objects and people.
Using roughly 90 percent of the brain-imaging data, the pair trained a model to make links between fMRI data from a region of the brain that processes visual signals, called the early visual cortex, and the images the people viewed.
They used the same dataset to train a second model to form links between the textual descriptions of the images – created by five annotators in the previous study – and fMRI data from the brain region that processes the meaning of images, called the ventral visual cortex.
After training, these two models – which must be adapted to each individual – can translate brain-imaging data into forms that feed directly into Stable Diffusion. The system could reconstruct about 1000 of the images people viewed with roughly 80 percent accuracy, without being trained on those images. This level of accuracy is similar to that previously achieved in a study that analyzed the same data using a far more laborious method.
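In spirit, each add-on model can be as simple as a regression from voxel activations to the inputs Stable Diffusion expects. The sketch below is a hypothetical illustration using ridge regression on stand-in data, not the authors' actual code; the array shapes and variable names are invented for clarity.

```python
# Hypothetical sketch of the two per-person mappings, using ridge regression
# on synthetic stand-in data. All names and dimensions are invented.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# In the study, X_* would be fMRI voxel activations and z_* the latent-image
# and caption-embedding targets that Stable Diffusion consumes.
n_images, n_voxels = 200, 500
X_early = rng.normal(size=(n_images, n_voxels))    # early visual cortex scans
X_ventral = rng.normal(size=(n_images, n_voxels))  # ventral visual cortex scans
z_image = rng.normal(size=(n_images, 64))          # latent image targets
z_text = rng.normal(size=(n_images, 77))           # caption-embedding targets

# Model 1: early visual cortex -> latent image representation (layout).
image_decoder = Ridge(alpha=1.0).fit(X_early, z_image)

# Model 2: ventral visual cortex -> caption embedding (semantic content).
text_decoder = Ridge(alpha=1.0).fit(X_ventral, z_text)

# At test time, a held-out scan is decoded into both conditioning signals,
# which then stand in for Stable Diffusion's usual text-derived inputs.
z_image_pred = image_decoder.predict(X_early[:1])
z_text_pred = text_decoder.predict(X_ventral[:1])
```

Because each person's voxel-to-latent mapping differs, regressions like these would have to be refitted on fresh scans from every new participant, which is why the method cannot generalize across brains without new training data.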
“I couldn’t believe my eyes, I went to the bathroom and looked in the mirror, then went back to my desk to look again,” Takagi said.
However, the study tested the method on only four people, and the mind-reading AI worked better for some people than for others, Nishimoto said.
In addition, because the models must be customized to each individual’s brain, this approach requires lengthy brain-scanning sessions in bulky fMRI machines, said Sikun Lin at the University of California. “It’s not practical for everyday use,” said Lin.
In the future, more practical versions of the method may allow people to create art or alter images using their imagination, or add new elements to gameplay, Lin said.