It was only in science fiction that one could extract and read people’s dreams, thoughts, memories, and even intentions. But after scientists successfully reconstructed static images from human brain activity, Professor Jack Gallant and his colleagues published an approach in 2011 for reconstructing natural movies from visual cortex activity. Since most of our visual experiences, such as dreams, are dynamic and ever-changing, this approach, combining functional Magnetic Resonance Imaging (fMRI) with a computational model, brings the brain reading of science fiction a step closer to reality.
Fig. 1: The left panel shows a clip of a Hollywood movie trailer that the subjects viewed in the magnet; the right panel shows the clip reconstructed from brain activity measured by fMRI. (Credit: Jack Gallant, Shinji Nishimoto)
To improve the accuracy of brain decoding, better computational models must be built on a better understanding of how the brain represents visual information. That is, before reading the brain, we must understand how the brain reads. Jack Gallant’s team further addressed how the brain organizes the things we see. They identified the 1,705 most commonly used English nouns and verbs, then showed video clips depicting these words to subjects in an fMRI scanner while recording their brain activity. The responses at around 30,000 locations in the cortex were then analyzed by regularized linear regression, identifying, for each location, the words that activate the neurons there. Next, the team used principal components analysis to find a “semantic space” common to all the study subjects. Finally, a language map emerged, showing the locations that respond to each noun or verb, as well as how the brain understands the words in relation to each other.
Fig. 2: This semantic-space map shows that categories thought to be semantically related (e.g. athletes and walking) are represented similarly in the brain.
(Credit: Alexander G. Huth, Shinji Nishimoto, An T. Vu, Jack L. Gallant / Neuron)
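The analysis pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration using synthetic data in place of real fMRI recordings; all array sizes, variable names, and the use of scikit-learn’s `Ridge` and `PCA` are illustrative assumptions, not the authors’ actual code.

```python
# Sketch of the regression + PCA pipeline on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_timepoints = 200   # fMRI volumes recorded while watching clips
n_words = 50         # stand-in for the 1,705 word categories
n_voxels = 300       # stand-in for the ~30,000 cortical locations

# Design matrix: which word categories appear on screen at each timepoint.
X = rng.integers(0, 2, size=(n_timepoints, n_words)).astype(float)
# Measured activity at each voxel (synthetic: linear response plus noise).
Y = X @ rng.normal(size=(n_words, n_voxels)) \
    + rng.normal(scale=0.5, size=(n_timepoints, n_voxels))

# Step 1: regularized linear regression fits one weight per (word, voxel)
# pair, giving each voxel a "semantic tuning" profile over the categories.
model = Ridge(alpha=1.0).fit(X, Y)
tuning = model.coef_.T          # shape: (n_words, n_voxels)

# Step 2: principal components analysis across the voxel tuning profiles
# recovers a low-dimensional "semantic space" shared by all voxels; each
# voxel becomes a point in that space.
pca = PCA(n_components=4)
semantic_axes = pca.fit_transform(tuning.T)   # shape: (n_voxels, 4)

print(semantic_axes.shape)
```

Coloring each voxel by its coordinates in this low-dimensional space is, in spirit, how a map like Fig. 2 is drawn.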
As the map shows, neural activity clusters into groups, meaning that the brain organizes visual information by how items relate to one another: people, for instance, appear in green and animals in yellow. This reveals that different categories are represented not in distinct brain areas but as locations in a continuous semantic space, which opens a new door for looking at brain data. The better we understand the brain, the better the computational models we can develop to decode it in the future.
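The idea that related categories sit near each other in a continuous space can be made concrete with a toy example. The coordinates below are invented for illustration only (they are not the study’s data): each category is a vector, and a high cosine similarity between two vectors means the brain represents those categories similarly.

```python
# Toy semantic space: hypothetical 3-D coordinates for four categories.
import numpy as np

space = {
    "athlete": np.array([0.9, 0.1, 0.2]),
    "walking": np.array([0.8, 0.2, 0.1]),
    "animal":  np.array([0.1, 0.9, 0.3]),
    "vehicle": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the same direction in the space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related categories lie closer together than unrelated ones.
related = cosine(space["athlete"], space["walking"])    # high
unrelated = cosine(space["athlete"], space["vehicle"])  # lower
print(related > unrelated)  # True
```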
Nishimoto, Shinji, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant. "Reconstructing visual experiences from brain activity evoked by natural movies." Current Biology 21, no. 19 (2011): 1641-1646.
Huth, Alexander G., Shinji Nishimoto, An T. Vu, and Jack L. Gallant. "A continuous semantic space describes the representation of thousands of object and action categories across the human brain." Neuron 76, no. 6 (2012): 1210-1224.