Reading acquisition requires linking visual symbols with speech sounds, a process that drives the development of neural sensitivity to print. While prior studies have demonstrated the importance of cross-modal integration in spoken language areas, evidence for such integration in the left ventral occipitotemporal cortex (lvOT), the higher-level visual area that processes printed words, has remained more context-dependent. This longitudinal study investigated whether the lvOT undergoes cross-modal reorganization to support print-speech integration during reading development, and how these changes relate to reading skills. We followed children over two years, beginning at the onset of formal reading instruction, and examined lvOT responses to print-specific, speech-specific, and convergent input at whole-brain, region-of-interest, and voxel-based levels. Results showed that with reading experience, initially print-specific responses in the lvOT were transformed into responses to both print and speech input. This transformation correlated positively with reading skills, especially in the early stages of reading acquisition. These findings suggest that reading acquisition drives cross-modal reorganization within the lvOT, enabling the area to integrate print and speech, and they shed light on the broader neural mechanisms supporting reading development.
Reading relies on the ability to map written symbols onto speech sounds. A specific part of the left ventral occipitotemporal cortex, known as the Visual Word Form Area (VWFA), plays a crucial role in this process. As this mapping becomes automatized, the area progressively specializes in written word recognition. Yet, despite its key role in reading, the area also responds to speech. This observation raises questions about the actual nature of the neural representations encoded in the VWFA and, therefore, about the mechanism underlying its cross-modal responses. Here, we addressed this issue by applying fine-grained analyses of within- and cross-modal repetition suppression effects (RSEs) and Multi-Voxel Pattern Analyses in fMRI and sEEG experiments. Convergent evidence across analysis methods and protocols showed significant RSEs and successful decoding in both within-modal visual and auditory conditions, suggesting that populations of neurons within the VWFA distinctively encode written and spoken language. This functional organization of neural populations enables the area to respond to both written and spoken inputs. These findings open further discussion of how the human brain may be prepared for, and adapted to, the acquisition of a complex ability such as reading.
Learning to read changes the nature of speech representations. One possible change consists in transforming phonological representations into phonographic ones. However, evidence for such a transformation remains surprisingly scarce. Here, we used a novel word learning paradigm to address this issue. During the learning phase, participants learned unknown words in both spoken and written forms. Following this phase, the impact of spelling knowledge on the auditory perception of the novel words was assessed at two time points through an unattended oddball paradigm, while the Mismatch Negativity component was measured with high-density EEG. Immediately after the learning phase, no influence of spelling knowledge on the perception of the spoken input was found. Interestingly, one week later, this influence emerged: similar-sounding words with different spellings became more distinct than similar-sounding words that also shared the same spelling. Our findings provide novel neurophysiological evidence that phonological and orthographic representations become integrated once newly acquired knowledge has been consolidated. The resulting 'phonographic' representations may characterize how known words are stored in literates' mental lexicon.