In this functional magnetic resonance imaging study, we investigated whether language production and understanding recruit similar phoneme-specific networks. We did so by comparing the brain's response to different phoneme categories in minimal pairs: bilabial-initial words (e.g., "monkey") were contrasted with alveolar-initial words (e.g., "donkey") in 37 participants performing both language production and comprehension tasks. Individual-specific region-of-interest analyses showed that the same sensorimotor networks were activated across the language modalities. In motor regions, word production and comprehension elicited the same phoneme-specific topographical activity patterns, with stronger haemodynamic activations for alveolar-initial words in the tongue cortex and stronger activations for bilabial-initial words in the lip cortex. In the posterior and middle superior temporal cortex, production and comprehension likewise resulted in similar activity patterns, with enhanced activations to alveolar- compared with bilabial-initial words. These results challenge the classical asymmetry between language production and understanding in neurobiological models of language, and instead advocate for a cortical organization in which phonology is carried by similar topographical activations in motor cortex and distributed activations in temporal cortex across the language modalities.
The "MEG-GLOUPS" dataset offers a curated collection of raw magnetoencephalography recordings from seventeen French participants engaged in a pseudoword learning task, as well as resting-state activity recorded before and after the task. A dataset called Gloups, featuring the same participants and a similar learning task adapted to functional magnetic resonance imaging, is already available. In the learning task, participants were instructed to pronounce monosyllabic pseudowords, which were presented both visually and auditorily. These pseudowords were either phonotactically legal or illegal in the participants' native language, French. We organized the dataset according to the Brain Imaging Data Structure (BIDS), pre-processed the data, and performed a minimal analysis of event-related fields (ERFs) to ensure the quality and integrity of the dataset. This data collection includes comprehensive descriptions of the theoretical background, methods, data recordings, and technical validation.
This study aimed to assess the extent to which human participants co-represent the lexico-semantic processing of a humanoid robot partner. Specifically, we investigated whether participants would engage their speech production system to predict the robot's upcoming words, and how they would progressively adapt to the robot's verbal behaviour. In the experiment, a human participant and a robot alternated in naming pictures of objects from 15 semantic categories, while the participant's electrophysiological activity was recorded. We manipulated word frequency as a measure of lexical access, with half of the pictures associated with high-frequency names and the other half with low-frequency names. In addition, the robot was programmed to provide semantic category labels (e.g., "tool" for the picture of a hammer) instead of the more typical basic-level names (e.g., "hammer") for items in five categories. Analysis of the stimulus-locked activity revealed a comparable event-related potential (ERP) associated with word frequency both when it was the participant's turn to speak and when it was the robot's. Analysis of the response-locked activity showed a different pattern for category and basic-level responses in the first but not in the second part of the experiment, suggesting that participants adapted to the robot's lexico-semantic patterns over time. These findings provide empirical evidence for two key points: (1) participants engage their speech production system to predict the robot's upcoming words, and (2) partner-adaptive behaviour facilitates comprehension of the robot's speech.