Auditory event-related brain potentials such as the mismatch negativity (MMN) and the frequency-following response (FFR) allow exploring speech sound encoding along the auditory pathway. Here, we collected event-related brain potential (ERP) and FFR neural responses to syllables in healthy full-term newborns (N = 17, mean age = 3 days) and adults (N = 21, mean age = 22.7 years). Participants were passively exposed to alternating blocks of syllables presented at either fast or slow stimulation rates while we recorded electroencephalography (EEG). Specifically, blocks containing the synthetic /oa/ syllable alternated with "oddball" blocks containing three natural syllables differing in place of articulation (one standard /da/ and two deviants, /ba/ and /ga/). At the FFR level, we found that 3-day-old newborns (i) exhibit an already functional encoding of vowel pitch and (ii) show an immature encoding of vowel formant structure, replicating previous observations. At the ERP level, the two deviants elicited a clear MMN in both groups, although with different topographies, suggesting an immature sensitivity to place of articulation in newborns. These results confirm the role of experience-dependent developmental factors that may differentially shape FFRs and ERPs to speech sound features. Furthermore, this study highlights the feasibility of assessing the hierarchy of neural speech sound encoding in a short experimental session.
Reading relies on the ability to map written symbols onto speech sounds. A specific part of the left ventral occipitotemporal cortex, known as the Visual Word Form Area (VWFA), plays a crucial role in this process. Through the automatization of this mapping ability, the area progressively becomes specialized in written word recognition. Yet, despite its key role in reading, the area also responds to speech. This observation raises questions about the actual nature of the neural representations encoded in the VWFA and, therefore, about the mechanism underlying its cross-modal responses. Here, we addressed this issue by applying fine-grained analyses of within- and cross-modal repetition suppression effects (RSEs) and Multi-Voxel Pattern Analyses in fMRI and sEEG experiments. Convergent evidence across analysis methods and protocols showed significant RSEs and successful decoding in both within-modal visual and auditory conditions, suggesting that populations of neurons within the VWFA distinctively encode written and spoken language. This functional organization of neural populations enables the area to respond to both written and spoken inputs. The finding opens further discussion on how the human brain may be prepared and adapted for the acquisition of a complex ability such as reading.
Learning to read changes the nature of speech representations. One possible change consists in transforming phonological representations into phonographic ones. However, evidence for such a transformation remains surprisingly scarce. Here, we used a novel word learning paradigm to address this issue. During the learning phase, participants learned unknown words in both spoken and written forms. Following this phase, the impact of spelling knowledge on the auditory perception of the novel words was assessed at two time points through an unattended oddball paradigm, while the Mismatch Negativity (MMN) component was measured with high-density EEG. Immediately after the learning phase, no influence of spelling knowledge on the perception of the spoken input was found. Interestingly, one week later, this influence emerged, making similar-sounding words with different spellings more distinct than similar-sounding words that also shared the same spelling. Our finding provides novel neurophysiological evidence of an integration of phonological and orthographic representations that occurs once newly acquired knowledge has been consolidated. The resulting 'phonographic' representations may characterize how known words are stored in literates' mental lexicon.