Neuroimaging reveals hidden communication between brain layers during reading

Language involves many different regions of the brain. In a neuroimaging study reported in PNAS, researchers from the Max Planck Institute for Psycholinguistics and the Donders Institute at Radboud University discovered previously hidden connections between brain layers during reading.

How is language represented in the brain? This question is challenging, as language involves many regions throughout the brain that interact in a dynamic way. For instance, when people read a word, they combine ‘bottom-up’ (lower-level) visual information to recognise the letters with ‘top-down’ (higher-level) cognitive information to recognise the word and retrieve its meaning from memory. Such top-down and bottom-up information streams are notoriously difficult to measure noninvasively (without having to open up the brain).

A research team led by Daniel Sharoh from the Donders Centre for Cognitive Neuroimaging at Radboud University Nijmegen, Kirsten Weber (Radboud University, MPI), David Norris (Radboud University, MPI) and Peter Hagoort (Radboud University, MPI) wanted to investigate the brain’s reading network at a more fine-grained level. They used the 7 Tesla MRI scanner at the Erwin L. Hahn Institute in Essen for laminar functional magnetic resonance imaging (lfMRI), which measures brain activation at different depths, or ‘layers’, of the cortex. These layers lie directly on top of one another and are each less than a millimetre thick. Measuring at this level is important, as the layers can be related to the direction of the signals: deep layers are associated with top-down information, whereas middle layers are associated with bottom-up information. Only laminar fMRI is sensitive enough to resolve activity in these deeper layers. With this new technique, would the investigators be able to find a top-down flow of information to the deeper layers during word reading?

To answer this question, the researchers created pseudowords such as “rorf” and “bofgieneer,” to be compared with real Dutch words such as “zalm” (salmon) and “batterij” (battery). Pseudowords are ‘possible’ words that happen not to exist; they are pronounceable and therefore ‘readable’. Twenty-two native Dutch speakers were asked to silently read the words and pseudowords while their brains were being scanned. The participants also viewed ‘unreadable’ sequences of invented ‘false font’ characters that resembled existing letters. Their task was to press a button only when an item was a real word.

By comparing the brain activation for ‘readable’ items (words and pseudowords) with that for ‘unreadable’ items (false fonts), the investigators could isolate the ‘reading area’ of the brain. This area is also known as the ‘visual word form area’ (VWFA) and is situated in the temporal lobe (the left occipitotemporal cortex). As a next step, the researchers compared words directly with pseudowords to probe the VWFA further. Bottom-up sensory information is needed for both types of items, to recognise the strings as letters. But would top-down information from language areas, needed to distinguish words from pseudowords, be visible as well?
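The logic of these two comparisons can be illustrated with a standard fMRI contrast analysis. The sketch below is not the authors’ actual pipeline; it simply shows, using the open-source nilearn library and invented event timings, how the ‘readable versus unreadable’ and ‘word versus pseudoword’ contrasts could be expressed as weights over the experimental conditions in a general linear model.

```python
# A minimal sketch, not the authors' analysis pipeline: it shows how the two
# contrasts described above could be specified in a standard GLM framework
# using the open-source nilearn library. All event timings are invented.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

t_r = 2.0                              # assumed repetition time (seconds)
frame_times = np.arange(60) * t_r      # 60 hypothetical scans

# Hypothetical trial onsets/durations for the three stimulus types.
events = pd.DataFrame({
    "onset":      [4, 16, 28, 40, 52, 64, 76, 88, 100],
    "duration":   [2] * 9,
    "trial_type": ["word", "pseudoword", "false_font"] * 3,
})

design = make_first_level_design_matrix(frame_times, events, hrf_model="glover")

def contrast_vector(design_matrix, weights):
    """Turn a {condition: weight} mapping into a contrast vector for this design."""
    vec = np.zeros(design_matrix.shape[1])
    for name, weight in weights.items():
        vec[design_matrix.columns.get_loc(name)] = weight
    return vec

# 'Readable' items (words + pseudowords) versus 'unreadable' false fonts,
# used to localise the visual word form area.
readable_vs_false = contrast_vector(design, {"word": 1, "pseudoword": 1, "false_font": -2})

# Words versus pseudowords, probing top-down lexical input within that area.
word_vs_pseudoword = contrast_vector(design, {"word": 1, "pseudoword": -1})
```

In a full analysis, such contrast vectors would be applied to a model fitted to the scanner data (for example via nilearn’s FirstLevelModel.compute_contrast) to produce the activation maps described here.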

The researchers found stronger activation for words than for pseudowords in the deep layers of the VWFA. This activation was driven by top-down projections from higher-level language areas of the brain, the left middle temporal gyrus (lMTG) and the left posterior middle temporal gyrus (lpMTG), which are well-known language areas involved in retrieving words and their meaning. In contrast, the researchers found decreased activation in the middle layer of the reading area, indicating that the deep layer ‘suppresses’ activation of the middle layer during word reading. Conventional fMRI would have missed this nuance, as only laminar fMRI is sensitive to layer-specific activation.
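To give a sense of how such layer-specific effects are tested, the sketch below compares word and pseudoword response amplitudes across cortical depth bins with a paired t-test per layer. Everything in it is an assumption for illustration: the three-bin layering, the variable names and the simulated amplitudes are not values from the study.

```python
# A hypothetical illustration of a layer-by-layer comparison; all numbers below
# are simulated for demonstration and are not data from the study.
import numpy as np
from scipy import stats

depth_bins = ["deep", "middle", "superficial"]   # assumed three-bin depth sampling
n_subjects = 22
rng = np.random.default_rng(0)

# Simulated per-participant response amplitudes (subjects x depth bins).
betas_pseudowords = rng.normal(loc=0.8, scale=0.3, size=(n_subjects, len(depth_bins)))
betas_words = betas_pseudowords + rng.normal(loc=0.0, scale=0.2, size=betas_pseudowords.shape)
betas_words[:, 0] += 0.3   # inject a deep-layer word advantage, for illustration only

# Paired t-test per depth bin: in which layers do words differ from pseudowords?
for d, label in enumerate(depth_bins):
    t, p = stats.ttest_rel(betas_words[:, d], betas_pseudowords[:, d])
    print(f"{label:>11} layer: t({n_subjects - 1}) = {t:5.2f}, p = {p:.3f}")
```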
