In this functional magnetic resonance imaging study, we investigated whether language production and understanding recruit similar phoneme-specific networks. We did so by comparing the brain's response to different phoneme categories in minimal pairs: bilabial-initial words (e.g., "monkey") were contrasted with alveolar-initial words (e.g., "donkey") in 37 participants performing both language production and comprehension tasks. Individual-specific region-of-interest analyses showed that the same sensorimotor networks were activated across the language modalities. In motor regions, word production and comprehension elicited the same phoneme-specific topographical activity patterns, with stronger haemodynamic activations for alveolar-initial words in the tongue cortex and stronger activations for bilabial-initial words in the lip cortex. In the posterior and middle superior temporal cortex, production and comprehension likewise resulted in similar activity patterns, with enhanced activations to alveolar- compared with bilabial-initial words. These results disagree with the classical asymmetry between language production and understanding in neurobiological models of language, and instead advocate for a cortical organization in which phonology is carried by similar topographical activations in motor cortex and distributed activations in temporal cortex across the language modalities.
Reading relies on the ability to map written symbols onto speech sounds. A specific part of the left ventral occipitotemporal cortex, known as the Visual Word Form Area (VWFA), plays a crucial role in this process. Through the automatization of this mapping ability, the area progressively becomes specialized in written word recognition. Yet, despite its key role in reading, the area also responds to speech. This observation raises questions about the actual nature of the neural representations encoded in the VWFA and, therefore, about the mechanism underlying its cross-modal responses. Here, we addressed this issue by applying fine-grained analyses of within- and cross-modal repetition suppression effects (RSEs) and multi-voxel pattern analyses in fMRI and sEEG experiments. Convergent evidence across analysis methods and protocols showed significant RSEs and successful decoding in both within-modal visual and auditory conditions, suggesting that populations of neurons within the VWFA distinctively encode written and spoken language. This functional organization of neural populations enables the area to respond to both written and spoken inputs. The finding opens further discussion of how the human brain may be prepared and adapted for the acquisition of a complex ability such as reading.