Shared or different: How linked are word production and comprehension?

Entre processus partagés et différents : A quel point la production et la compréhension du langage sont-elles liées ?
Amie Fairs, Raphaël Fargier et Kristof Strijkers

Abstract

How we produce and perceive words remains one of the major questions in language research, since words are essential for communicating about the world. While it may seem simple, producing and understanding words are complex tasks involving multiple stages. When we want to say something, we must transform an idea into a concrete 'word' and instruct our vocal apparatus to produce its sounds in a particular order. To understand a word, we must segment the continuous speech stream into 'words' that we know. Researchers currently debate how these different stages are organised and how similar they are between production and comprehension. In this article we focus on this question, in particular on how far linguistic representations are shared between production and comprehension.
We begin by presenting the different processing stages involved in each language behaviour. In general, three levels of linguistic processing are distinguished: the semantic level (i.e., the meaning of the word), the phonological level (i.e., the sound form of the word) and the articulatory or auditory level (i.e., how words are produced, or decoded). For each level, there are different ways of approaching the question of shared representations.
Before getting to the heart of the matter, it is essential to clarify the terms 'representation' and 'shared', even though defining 'representation' is difficult because of a) ambiguities in what researchers mean by the word, and b) current technical limits on what we can measure. For some researchers, a representation is a holistic image of the word with a concrete 'place' in the brain; for others, it is constructed 'online' during communication; and it may well be a combination of both ideas. Without a clear definition, testing the extent to which representations are shared becomes a challenge. A strict definition of a shared representation in the brain would require that the same neurons fire for the same reason, at the same time, during the production and the comprehension of a word. Given the absence of adequate technology to measure this, we rely on a different approach, in three parts: word representations are shared if 1) they show the same time course of activation across the different processing levels, 2) the same brain regions are recruited for linguistic processing in production and comprehension, and 3) when production and comprehension are engaged together, the two tasks interfere with each other and this interference can be tied to specific processing levels.
Regarding our first criterion, the time course of linguistic processing, the empirical data are inconclusive. The central question is whether the different linguistic levels are accessed simultaneously or sequentially, for example whether the meaning of a word is accessed before its sounds, or whether all these elements are available at the same time. Some studies have shown activation patterns compatible with sequential activation of the linguistic levels (Dufour et al., 2013; Salmelin et al., 1994), but more recent studies, notably from our team, suggest that all processing levels are activated simultaneously and early. For example, using EEG we have shown that lexical and phonemic information are activated simultaneously in production and in comprehension (Fairs et al., 2021). These data suggest that the representations underlying production and comprehension are shared.
Regarding our second criterion, whether the brain regions underlying production and comprehension are the same, the empirical data are again inconclusive. Some models hold that different brain areas are involved in production and comprehension: temporal areas would be involved in both activities, but frontal areas only in production (Hickok & Poeppel, 2007). Other models hold that the same representations are used in production and comprehension, and therefore that the same brain regions are activated (Pulvermüller, 1999; Strijkers et al., 2017). In an ongoing preregistered study, Fairs et al. (2020) aim to address this question by testing whether specific brain areas, known to be activated when processing certain semantic categories, are active in both production and comprehension. This study has the potential to truly demonstrate how interconnected language production and comprehension are.
Regarding our third criterion, we describe a set of dual-task experiments that tested whether language production and comprehension can be carried out simultaneously, and/or the extent to which they interfere with each other. Fargier and Laganaro (2016, 2019) showed interference when participants listened to syllables while producing words, and notably that the stage encoding the sound form of words is shared between production and comprehension. Fairs and colleagues (Fairs, 2019) observed a broader processing overlap between production and comprehension, suggesting that not only the word-form encoding stage but also other linguistic levels are shared between the two language behaviours. Other studies have suggested, however, that the linguistic system can adapt flexibly, such that the question is no longer whether representations overlap, but when that overlap occurs. In the final section of the article, we advocate current approaches that study words in context and highlight some of the questions that remain to be addressed.
Words are the basis of our conversations, and understanding how we create and sustain conversations remains one of the ultimate goals of psycholinguistic research. Current research on conversation suggests a large overlap between production and comprehension processes within a person (Menenti et al., 2011) and between people (Silbert et al., 2014). Nevertheless, only further research on the building blocks of language, words, will tell us how language production and comprehension work, and how far the representations and processes involved are shared between these two activities.


Acknowledgements
Research in this article was supported by grants from the 'Agence Nationale de la Recherche' ANR-16-CE28-0007-01 and ANR-18-FRAL-0013-01 to K. S., the Excellence Initiative of Aix-Marseille University (A*MIDEX) to R.F., the Max Planck Institute for Psycholinguistics to A.F., and additional support by an ANR grant awarded to the Institute of Language, Communication and Brain (ILCB; ANR-16-CONV-0002).

How people produce and perceive words is one of the core questions in language research. Words are the building blocks of language, as without words we cannot build sentences. At first glance it seems that words should be easy to produce or understand, as we can say a word within half a second of thinking of it, and we understand words even faster. However, the process of producing or understanding a word is actually very complex, and remains a source of debate (as described later in the article; see also de Zubicaray & Piai (2019); Meyer et al. (2016); Strijkers & Costa (2016)). For example, if a person sees a cow in a field and wants to refer to it, multiple processes need to happen before the word 'cow' comes from the person's mouth. The person needs to recognise that the cow is a cow, and retrieve, from their long-term memory, the semantic, grammatical, phonological, and phonetic information about the word 'cow'. To say this aloud, the speaker converts this abstract information into muscle and vocal commands. There is a similar chain of processes for the person listening to the word 'cow', yet intuitively it should run in reverse order: the soundwave from the speaker's mouth travels into the ears of the listener, who must convert the air fluctuations into the phonological, lexical and semantic representations associated with the word 'cow' to understand what the speaker is saying.

Despite the similarity of the steps involved in speaking or listening to the word 'cow', research has tended to focus on either language production or language comprehension (Price, 2012). In recent years this trend has changed, with a greater focus on studying production, comprehension, and how they work in concert, and we will review accumulating evidence suggesting that production and comprehension are strongly linked. How strongly they are linked, though, remains an open question and an area of active research; this is the focus of this article. Before diving into the research on the extent to which word production and comprehension are linked, we first describe current theories of language production, comprehension, and their interaction.

Theories of word production and comprehension

As explained above with the cow example, word production involves a number of stages that must be passed through to go from the thought of a word to actually pronouncing it. Figure 1 displays these stages in greater detail. Note that to some degree the stages run in reverse for word comprehension, as detailed below. In fact, how linguistic processes unfold in time in comprehension and production has shaped theoretical proposals, with models traditionally emphasizing a hierarchical architecture. We start with this traditional assumption and later examine alternative proposals.

Firstly, there must be activation (and selection) of the conceptual idea to be uttered. In our 'cow' example, this refers to activating the conceptual and semantic information relating to cows, which could include that cows are animals, have four legs, can be black and white, eat grass, etc. Then an abstract lexical form of the word 'cow' is selected, which contains grammatical information. This abstract form needs to be converted to an abstract phonological form of 'cow', which carries abstract information about the sounds that make up the word. The next stage is phonetic encoding, where the abstract sound form is converted into concrete information the brain can use to produce output; here, the phonemes that need to be produced, /kaʊ/. If the syllables are common, they may be retrieved from a syllabary, which is akin to a dictionary of the frequent syllables of a language (Bürki et al., 2015; Levelt & Wheeldon, 1994). Otherwise, each phoneme is retrieved and assembled into the word form, along with the information required to produce that word aloud. The final stage is articulation, where the word 'cow' is produced by the larynx, vocal cords, mouth, and articulatory muscles. Self-monitoring also happens at every stage, to ensure that no mistakes are made. For example, the production system can monitor whether the wrong lexical form has been selected, as when the system selects 'dog' instead of 'cow', or whether incorrect phonemes are selected, as when 'pow' rather than 'cow' is selected.
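To make this staged architecture concrete, below is a purely illustrative sketch in Python, walking a toy lexicon entry through the stages just described. The entry, its field names and the produce function are invented stand-ins for exposition; they are not part of any published model.

```python
# Purely illustrative sketch of the classical staged view of word production
# (cf. Levelt et al., 1999). The lexicon entry is a toy stand-in, not a model.

LEXICON = {
    "cow": {
        "semantics": {"animal", "four-legged", "eats grass"},
        "lemma": {"category": "noun", "number": "singular"},
        "phonology": "/kaʊ/",    # abstract sound form
        "syllables": ["kaʊ"],    # frequent syllables: retrieved from a 'syllabary'
    }
}

def produce(concept: str) -> str:
    """Walk one entry through the production stages described in the text."""
    entry = LEXICON[concept]          # conceptual/semantic activation
    lemma = entry["lemma"]            # lexical selection (grammatical info)
    phonology = entry["phonology"]    # phonological encoding
    syllables = entry["syllables"]    # phonetic encoding via the syllabary
    return f"articulating {'-'.join(syllables)} ({lemma['category']}, {phonology})"

print(produce("cow"))  # -> articulating kaʊ (noun, /kaʊ/)
```

A comprehension sketch would traverse the same mapping in the opposite direction, from sound form to semantics.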

Figure 1: A unified view of comprehension and production processes


The figure illustrates similarities and differences between production processes (left) and comprehension processes (right). Similar stages are required in both language behaviours: semantic (blue square), lexical (pink square), phonological (green square) and monitoring (orange square). Production and comprehension differ in that a soundwave is the output of articulation in production (left panel) and the input in comprehension. The flow of information is assumed to be reversed in production and comprehension, with the main flow going from semantic to phonological in production and from phonological to semantic in comprehension (dark blue arrows). Cascading and parallel processes are indicated by concurrent light blue arrows. Middle panel: theoretical models make different assumptions (e.g. separation vs. integration) about the degree of overlap in the processes and representations involved in production and comprehension. Different psycholinguistic and neural predictions can be formulated to characterize shared vs. different word representations and processes.

For word comprehension, similar stages are required in reverse, but with some changes relating to the difference in sensory input. Spoken words are 'heard' when differences in sound pressure travel through the air from the speaker's mouth and pass through the listener's auditory canal. These sounds are processed as linguistic input (Hamilton et al., 2021) and matched to phonemes. The phonemic form of the word activates phonological, morphemic, and lexical information, which finally maps onto semantic memory; thus, the stages are quite similar to those of production.

As previously mentioned, two main groups of theories account for these processes in slightly different ways, each with psycholinguistic and neuroimaging support. One group of theories posits that the stages are passed through sequentially, with higher-level (i.e., meaning-related) stages active before lower-level (i.e., sound-related) stages. Sequential theories of production (Dell, 1984; Indefrey, 2011; Indefrey & Levelt, 2004; Levelt et al., 1999) thus argue that semantic representations are active before phonological representations (in a sense, activation flows down from conceptualising a word to producing it). For comprehension (Hauk, 2016; Hickok & Poeppel, 2004, 2007), phonological representations must precede semantic representations. Activation can cascade through the system, such that as long as there is some activation at a higher-level stage it can 'cascade' to the next stage even if the prior stage is not complete (Dell, 1984); still, the commonality between all of these theories is that the stages are passed through sequentially, with information at higher-level stages available before information at lower-level stages. Sequential theories also posit that word production and comprehension use only partly overlapping regions of the brain (i.e., different areas of the brain are involved in each language behaviour). These assumptions come from a long-standing tradition, originating in the 19th century, of studying language in aphasic individuals, with impairments in production or in comprehension being associated with lesions in different parts of the brain (Tremblay & Dick, 2016). For example, word production is seen to broadly encompass frontal and temporal brain regions, whereas word comprehension largely activates temporal, and not frontal, regions. Therefore, the brain areas involved in production and comprehension are thought to be different, which would suggest that processes and representations differ in some way between the two language behaviours.

In contrast, a different set of theories, relying more on accumulating evidence from neural systems, posits that each stage can be accessed in parallel. These parallel integration theories of both production and comprehension (Pulvermüller, 2018; Pulvermüller, 1999; Strijkers, 2016a, 2016b; Strijkers & Costa, 2016) argue that all stages can be accessed simultaneously. There is no serial processing in which activation must be 'transferred' from one process to another; instead, activation spreads through the system to all stages in parallel. These theories follow Hebbian explanations of how neural activation works (Braitenberg, 1978; Hebb, 1949), captured by the popular phrase 'what fires together, wires together'. Accordingly, because words are learnt as units (that is, the meaning and sounds of a word are linked together in time) and the neurons involved in a word's representation in the brain are linked, when some activation is present in the neural network representing that word, the whole network becomes active.
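The timing contrast between the two classes of theories can be made concrete with a deliberately simple simulation. The sketch below is not a published model; the levels, accumulation rate and threshold are arbitrary assumptions, chosen only to show how a strictly serial architecture staggers the onsets of semantic, lexical and phonological activation, whereas a parallel architecture ignites all levels together.

```python
import numpy as np

# Toy dynamics, not a published model: three word-processing levels accumulate
# activation toward a threshold. In the serial scheme a level starts only after
# the level above it crosses threshold; in the parallel scheme all levels start
# at stimulus onset. Rates and thresholds are arbitrary assumptions.

LEVELS = ["semantic", "lexical", "phonological"]
DT, T_MAX, RATE, THRESHOLD = 0.001, 0.6, 10.0, 0.8  # s, s, 1/s, arbitrary units

def onset_times(serial: bool) -> dict:
    """Simulate and return the threshold-crossing time (ms) of each level."""
    activation = np.zeros(len(LEVELS))
    onsets = {}
    for step in range(int(T_MAX / DT)):
        for i, level in enumerate(LEVELS):
            if serial:
                gate = i == 0 or activation[i - 1] >= THRESHOLD
            else:
                gate = True
            if gate:
                activation[i] += RATE * (1.0 - activation[i]) * DT
            if level not in onsets and activation[i] >= THRESHOLD:
                onsets[level] = round(step * DT * 1000)
    return onsets

print("serial:  ", onset_times(serial=True))   # staggered onsets
print("parallel:", onset_times(serial=False))  # near-simultaneous onsets
```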

These parallel integration theories also posit that the same brain regions are involved in word production and comprehension. They argue that frontal and temporal brain regions are involved in both word production and word comprehension, in contrast to sequential models. This is because, as stated above, words are learnt as units linking the sound and the meaning of the word. When a person produces a word, they also hear that word, so speaking triggers hearing and sensory and motor information become grouped together. This sensory and motor information is located in fronto-temporal regions. Therefore, because the same word representations are involved in production and comprehension, words are represented in distributed fronto-temporal circuits.

Overall, there are two big differences between sequential and parallel integration models. The first is the time course of processing: sequential models argue that processes occur one after another, whereas parallel models argue that they are activated simultaneously. The second is the location of processing: sequential models argue that, to some extent, non-overlapping regions of the brain are involved in word production and comprehension, whereas parallel integration models argue that the same regions are involved in both. Therefore, current state-of-the-art theories of word production and comprehension implicitly disagree on how shared the two behaviours are, and it is worth noting that there is experimental evidence for both types of theories. However, the theories presented so far are only concerned with words. There are other theories, not specific to word production and comprehension, which model production and comprehension simultaneously, and these may give some insight into how representations could be shared between them.

One of these theories, the production-comprehension integration theory (Pickering & Gambi, 2018; Pickering & Garrod, 2004, 2013, 2014), integrates production and comprehension as they would occur during dialogue, one of the most natural forms of language use (Levinson, 2016). The theory argues that words (and sentences) are composed of semantic, lexical, phonological, phonetic, and syntactic representations, and that these are a) shared to some degree between production and comprehension, and b) aligned between speakers in a conversation. In further extensions to this theory, Pickering and Garrod (2014) argue that speaking and listening are really forms of action production and action perception, and they draw on evidence from the action literature to model the linguistic system. Under these extensions, when speakers want to produce a word, they use their production system to process the word while concurrently using their comprehension system to monitor and predict what word they will produce. When listeners are listening to a word, they use their comprehension system to listen, and their production system to predict the word that is coming in. Thus, speakers and listeners use their production and comprehension systems in flexible, tightly linked ways. Following this rationale, calling on the language production and comprehension systems at the same time, or within the same time-frame, may also be detrimental if the two systems are entirely or partly shared (see also Dell & Chang (2014)).

In sum, theories of word production and word comprehension do not agree on whether the two language behaviours are shared, or on whether the same time course or brain regions are involved in both. Evidence bearing on these theories has come from psycholinguistic experiments, which provide behavioural measures such as reaction times, and from neuroimaging experiments, which provide brain data with millisecond temporal resolution (for MEG and EEG) or excellent spatial resolution (for fMRI), allowing us to understand when and where in the brain different cognitive processes relating to production and comprehension occur. Below, we describe in more detail studies which have tried to directly contrast the two theories, to understand the exact extent of shared representations between production and comprehension. However, one of the problems inherent in this area is defining what exactly a 'shared representation' is. Before describing the experimental evidence, we therefore describe the difficulties in pinning down what the phrase means and the criteria we follow for determining whether a representation is shared.

What do we mean by shared representations?

Defining something as 'truly shared' is extremely difficult. This is partly due to difficulty in mapping brain structure to psycholinguistic structure, partly due to ambiguity in definitions, and partly due to the current limitations in technical methods. Here, we briefly describe each of these issues and define exactly what we mean when we talk about representations being shared.

Firstly, there is no consensus on how to map structures in the brain (such as neurons or neuronal populations) onto psycholinguistic elements (e.g., Krakauer et al., 2017; Mehler et al., 1984; Poeppel, 2012). For example, does one neuron code for a phoneme? Or are phonemes instantiated by collections of neurons? How exactly is a syllable stored, and what should be measured in brain structure to see this? At some level we know that collections of neurons must store information about words, but exactly how this is achieved is unknown. This leaves scientists with the mapping problem, which is, in the words of Hagoort (2021), how to link the natural kinds of language, or psycholinguistic elements of words, to natural kinds in the brain. There is probably no one-to-one mapping between the structure of the brain and the levels of word processing, and this makes defining a representation difficult. Processes carried out by the brain are likely to depend on intersecting networks at a given time (Hagoort & Indefrey, 2014), such that only the dynamic properties of the system can shed light on shared representations and processes.

Secondly, the definition of a representation, or of a process, is somewhat ambiguous and varies between researchers. Representations can be seen as the codes with which we store knowledge and information about the world (Davis & Poldrack, 2013). However, even the word 'store' may not be appropriate, as representations may only be instantiations of information rather than stored entities per se. Thus, each time a word is used, its representation could differ depending on the situation. Relatedly, language scientists often talk about representations being acted upon by different processes, such as 'lexical selection' (where a lexical unit is selected for further processing). Unfortunately, scientists mostly study human behaviour during tasks where a process is always involved, such that disentangling representations and processes seems impossible. In the following we therefore describe experiments that look at the shared neurocognitive substrates of representations and processes together. These questions are not easy to answer and are not always well defined in the literature, leading to some level of ambiguity. Obviously, if we cannot pin down exactly what we mean by the word 'representation', then we cannot test whether representations are shared.

At its core, a shared representation or process could be seen as one that is exactly the same in production and comprehension. An extreme neural interpretation would be that the same set of neurons fires for the same reason during word production and word perception. At present we do not have the technology to determine this in humans, because our recording methods are not fine-grained enough. Methods for recording brain data such as EEG, MEG and fMRI do not measure the activity of single neurons; rather, they measure large neuronal populations firing at the same time, giving rise to a change in electrical output (EEG/MEG) or blood flow (fMRI). Current technology is therefore limited by the fact that we cannot actually record whether identical neurons fire during production and comprehension, except in the limited case of depth electrodes in epileptic patients (e.g., Llorens et al., 2011). However, even this is thrown into question by the mapping problem, because we do not know whether measuring at the level of single neurons would help determine a representation.

Despite these limitations, current research can still attempt to see how much of word production and word comprehension is shared, just as in other domains such as music and language (Fedorenko et al., 2009), language and action (Pulvermüller & Fadiga, 2010) or action execution and observation (de Vignemont & Haggard, 2008). Here, we define a representation as anything that we refer to when speaking about a word. Therefore, if we need to talk about the semantics of a word, we call that a semantic representation. In the brain, a representation is instantiated as a population of neurons firing in a certain pattern that gives rise to this information, e.g., neurons firing in a pattern that encodes the semantic information of a word. We define word representations as shared if a) there is overlap in the brain areas used in production and comprehension, b) the time courses of the different levels of processing are similar, or c) when two tasks which may share representations are performed together, there is interference between them. We discuss the evidence for each of these three criteria below.

Overlap in brain regions and time courses of language processing

We can test whether word representations and processes are shared between production and comprehension by testing when certain linguistic effects arise in processing and whether they are similar in production and comprehension, as well as by testing whether similar brain regions are involved in the two tasks. As explained in the introduction, there are two major classes of theories which posit different neurocognitive correlates of production and comprehension. Partial separation models argue that processing is sequential, and that some representations in production and comprehension are separate, resulting in different regions of the brain being involved in each language behaviour. Integration models predict that the same word representations and the same brain regions are involved in production and comprehension, and that all aspects of these word representations are active at the same time.

There is neuroimaging evidence supporting overlap of brain regions between production and comprehension. Okada & Hickok (2006) found that the posterior auditory cortex was active both when participants listened to names of pictures and when they covertly named those pictures, suggesting that the posterior auditory cortex is an important area for speech input and output representations. Pulvermüller et al. (2006) found that when participants listened to and produced syllables, the same areas of the motor cortex were activated (i.e., the tongue region of the motor cortex was active both when the tongue was involved in producing a syllable and when the syllable heard by the participant involved a tongue movement), suggesting that motor cortex activation is a meaningful part of the word representation, whether or not the person is actually planning to produce that sound. Similar conclusions about the same regions of the brain being involved in production and comprehension, especially relating to the sounds of words, have been reached elsewhere (see Grabski & Sato, 2020; Heinks‐Maldonado et al., 2005; Watkins & Paus, 2004). In a very influential meta-analysis, Price (2012) determined that a large number of brain regions are involved in both production and comprehension, such as the mid temporal gyrus, involved in accessing semantic representations across all language behaviours. However, this meta-analysis also posited regions active in only one language behaviour, such as the posterior inferior temporal gyrus for accessing semantics specifically in word production. Together these studies suggest that while there is overlap in brain regions, and hence some overlap in word representations, this is not the case across the board. Systematic investigation of this question is in its infancy, and we currently do not know how much overlap there is between production and comprehension.

When testing whether different levels of word representations are active simultaneously, and hence whether the time courses of effects are the same, the evidence is again mixed. In an influential meta-analysis, Indefrey (2011) (see also Indefrey & Levelt (2004)) found that in language production different levels of word representations become active sequentially, with explicit timings given to each stage: lexical and semantic information is activated around 200 ms after the onset of the task, while phonological information is not present until around 300 ms, thus at least 100 ms after the semantic information of that word has been activated (e.g., Dubarry et al., 2017; Levelt et al., 1998; Sahin et al., 2009; Salmelin et al., 1994; Van Turennout et al., 1997). Similarly, in comprehension, some studies have found timing differences between the onset of phonological activation, within 200 ms of processing, and lexical and semantic effects, within 300 to 400 ms, after listening to a word (e.g., Dufour et al., 2013; Holcomb & Neville, 1990; Winsler et al., 2018). In contrast, other reviews have found that semantic and phonological effects arise concurrently (and early), with little to no delay between effects at each level (Strijkers & Costa, 2011, 2016). While there may be later differences in the timing of levels as a function of task processing, there is an initial ignition of the word network in both production and comprehension, leading to simultaneous activation of all components of a word. This has been found in production, where both semantic and phonological effects were present 150 ms after task onset (e.g., Feng et al., 2021; Miozzo et al., 2015; Riès et al., 2017; Strijkers et al., 2010). For comprehension, again, some studies have found that phonological and lexical/semantic information is activated in parallel after hearing a word (e.g., MacGregor et al., 2012; Pulvermüller et al., 2005; Shtyrov et al., 2014). Altogether, whether the same brain regions are involved in word production and comprehension, and whether levels of representations are active simultaneously, are still open questions. We have carried out recent work investigating these questions, with ongoing work promising to truly shed light on this area. We discuss these studies in greater depth below.

The first study we discuss tested whether lexical and phonological effects arise at similar times in language production. Strijkers et al. (2017) carried out an MEG study aiming to determine whether different production levels arise in a more sequential or more integrated manner, by investigating the time course of the brain response to two different production variables. Participants named pictures, and the picture names varied in lexical frequency (i.e., how common they are in the language, 'dog' being more common than 'compass') and in phonotactic properties (whether they started with labial sounds, such as 'b', or coronal sounds primarily involving the tongue, such as 't'). Strijkers et al. found both lexical frequency and phonotactic effects within 180 ms of word processing. The effects were found in different brain regions, the lexical frequency effect in the mid temporal cortex and the left inferior frontal gyrus, and the phonotactic effect in the motor cortex and the posterior superior temporal cortex. An interesting feature of the phonotactic effect was its somatotopic activation: lip-related regions were more active for lip-related sounds, and tongue-related regions for tongue-related sounds. Importantly, both effects arose simultaneously, which supports parallel activation of word representations and argues strongly against sequential activation of different levels of word representation. This study therefore supports integration models, which, as already stated, argue that words are represented in the brain in functional units used for both word production and comprehension. However, only speech production was tested, so no direct contrast between word production and word comprehension was possible.

The second study, Fairs et al. (2021), built on Strijkers et al. (2017) by directly comparing the time course of word processing in production and comprehension. In an EEG study, participants named pictures in one task and listened to words in the other. Importantly, the same words were used as stimuli in both tasks (i.e., participants would name a 'horse' in the production task and hear the word 'horse' in the comprehension task; see Figure 2). This is critical because the experimental manipulations were identical between the two tasks, allowing us to compare 'like for like' in processing. As in Strijkers et al. (2017), the stimuli varied in lexical frequency and phonotactic frequency. Fairs et al. found lexical and phonotactic frequency effects arising simultaneously in early time windows, starting as early as 75 ms after stimulus onset, in both production and comprehension. These results again argue for simultaneous activation of all levels of word representations, where the entire word representation first ignites and all word information is available at once. The fact that the same time courses were found in production and comprehension suggests that the way words are processed in the two language behaviours is either the same or very strongly related. This study is the first to directly compare the time course of processing in production and comprehension, and it finds evidence that they are shared.

Figure 2: Schematic representation of the rationale of Fairs et al. (2021)


The figure displays the experimental design with the language production (top panel) and language comprehension (bottom panel) tasks. In production, participants overtly name pictures, whereas in comprehension participants listen to spoken words and press a response button when a spoken word belongs to the semantic category 'food'. In both tasks the same psycholinguistic manipulations are used (high vs. low word frequency, HWF/LWF; high vs. low phonological frequency, HPF/LPF). The middle panel schematically depicts the predicted timing of effects: according to sequential models, lexico-semantic information (yellow) should be available well before phonological-phonetic information (green) in language production, and vice versa in language perception. According to parallel models, lexico-semantic and phonological-phonetic word knowledge should rapidly activate simultaneously. Taken from Fairs et al. (2021).

A question arising from Strijkers et al. (2017) and Fairs et al. (2021) is whether shared regions of the brain, and not only shared time courses, are involved in production and comprehension, as neither study could answer this. In an ongoing study, Fairs et al. (2020) aim to answer this question with an MRI-constrained MEG study, meaning that both the time course of linguistic processing and the brain regions involved can be investigated. Participants name pictures in one task and listen to words in the other (as in Fairs et al. (2021)), again with the same stimuli in both tasks. The stimuli are manipulated along two dimensions: a meaning dimension and a sound dimension. For the meaning manipulation, the stimuli are either animals or tools. For the sound manipulation, the stimuli either begin with sounds produced by the lips (e.g., 'b') or with sounds produced by the tongue (e.g., 't'). These two manipulations have established cortical dissociations in the brain (Chen et al., 2017; Kemmerer, 2014; Pulvermüller et al., 2006; Strijkers et al., 2017): when comparing the names of animal vs. tool items, or lip vs. tongue sounds, different regions of the brain are specifically active for animals, tools, lip sounds and tongue sounds. This is important because it allows explicit predictions about exactly where processing differences between the stimuli should be found. Neural activation in these areas will be used in two ways. The first is to determine when processing begins, by measuring when activation arises in a specific region and marking that as the onset of that representation. For example, in an area specific to animal processing, the time point at which activation is detected (the 'onset of activation') indexes when meaning-related processing began. With this approach we can estimate something close to the true onset of meaning- and sound-related information. The second is to test whether the same regions of the brain are involved in production and perception: the animal-, tool-, lip- and tongue-specific regions are not specific to production or comprehension, which allows us to test whether the same regions are active (with the same time course) in both. This design more directly addresses whether representations are shared between production and comprehension, because we directly test whether specific brain regions are active when producing or comprehending the same words.
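As an illustration of how such an 'onset of activation' might be computed, the sketch below detects the first sustained threshold crossing in a simulated region-of-interest time course. The criteria (baseline mean plus two standard deviations, a 20 ms sustained run) are common but arbitrary choices, not the preregistered pipeline of Fairs et al. (2020).

```python
import numpy as np
from typing import Optional

# Illustrative onset detection for a region-of-interest (ROI) time course:
# the onset is the first post-stimulus sample starting a sustained run of
# samples above a baseline-derived threshold. Criteria are arbitrary choices.

def onset_of_activation(signal: np.ndarray, times: np.ndarray,
                        n_std: float = 2.0,
                        min_run_ms: float = 20.0) -> Optional[float]:
    """Return onset time in seconds, or None if no sustained crossing."""
    baseline = signal[times < 0]                        # pre-stimulus samples
    threshold = baseline.mean() + n_std * baseline.std()
    dt = float(times[1] - times[0])
    min_run = max(1, int(min_run_ms / 1000.0 / dt))     # samples per run
    above = (signal >= threshold) & (times >= 0)
    run = 0
    for i, ok in enumerate(above):
        run = run + 1 if ok else 0
        if run >= min_run:
            return float(times[i - min_run + 1])        # start of the run
    return None

# Toy example: a signal that 'ignites' 150 ms after stimulus onset.
times = np.arange(-0.2, 0.6, 0.001)
rng = np.random.default_rng(0)
signal = rng.normal(size=times.size) + 8.0 * (times >= 0.15)
print(onset_of_activation(signal, times))               # approximately 0.15
```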

Overall, the evidence on the time course and brain regions involved in production and comprehension is somewhat mixed, with some studies finding similar time courses and brain regions, and others finding more separation. Our own work tends to favour an integration account in which all levels of a word representation are active simultaneously in both production and comprehension. However, directly comparing facets of production and comprehension is a young research area, and the coming years will provide fruitful avenues to help us ultimately work out how the production and comprehension of words work in the brain.

Simultaneous production and comprehension: Dual task paradigms

Other studies have approached the question of what is shared between production and comprehension differently, by testing whether production and comprehension can be accessed at the same time and whether there is interference between the tasks (e.g., Bürki et al., 2020; Fairs, 2019; Fairs et al., 2018; Fargier & Laganaro, 2016, 2019; Fournet et al., 2021). In everyday life people multitask quite often, such as driving while listening to the radio, or cooking while having a conversation, and we may feel less efficient or slowed down in one or the other activity. To test this, some studies have used a dual-task design, where participants carry out two tasks at the same time, one a production task and the other a comprehension task, such as picture naming and syllable identification. The logic is that if people carry out a production and a comprehension task at the same time and there is a cost in one of the tasks, such as it taking longer, then the processes involved in the two tasks, or the mental representations of the words, are likely shared between production and comprehension and cannot easily be accessed simultaneously. This is referred to as 'crosstalk' (Bergen et al., 2013; Pashler, 1994), reflecting the conflict of using similar neural resources for two tasks that use the same representations.

In a study by Fargier & Laganaro (2016), participants named pictures presented on screen. 300 ms after each picture was shown, participants also heard a syllable, such as 'mi'; thus, they heard these syllables while retrieving the names of the pictures. Participants always had to name the picture, but on only one-fifth of trials did they also need to press a button in response to a given syllable. This was the critical 'active dual-task' condition. Participants then did the same experiment again, always naming the pictures and hearing the syllables but never needing to respond to them (hence they could try to ignore them), called the 'passive dual-task' condition. Finally, participants also simply named the pictures with no syllables at all, the 'single task' condition. Throughout the experiment, their brain activity was recorded with EEG.

Fargier and Laganaro (2016) found that participants named the pictures fastest in the single task condition, followed by the passive dual-task condition, and were slowest in the critical active dual-task condition. This suggests some degree of similarity between the production processes involved in naming the picture and the comprehension processes involved in listening to and 'translating' the syllable into a linguistic form. The EEG results, however, were more enlightening and suggested specific interference in accessing the phonological representations of the picture name and the syllable. Other linguistic stages, such as accessing semantic and lexical information, were not affected. In other words, as participants need to retrieve the phonological code of the word they want to say and also access the phonological code of the syllable they hear, accessing the two within a similar time frame results in competition for similar neural resources. Similar interference at the phonological level when production and comprehension are carried out simultaneously was found in a follow-up study (Fargier & Laganaro, 2019).

Figure 3: Dual-task interference in speaking while listening


A: Production latencies in single-task (picture naming), passive dual-task (picture naming while passively hearing syllables) and active dual-task (picture naming while actively listening to syllables) are shown. B: Neural correlates of the single task (top panel), passive dual-task (middle panel) and active dual-task (bottom panel) corresponding to the grand-average event-related potentials. Neural events correspond to color-coded stable electrophysiological configurations at the scalp. The corresponding topographic patterns are displayed (lowest panel). Interference on late processes is reflected by the increased duration of the topographic pattern numbered 12 (in pink) and is illustrated by dotted lines. Adapted from Fargier & Laganaro (2016).

Using a similar design, Fairs, Bögels and Meyer (unpublished; see Fairs (2019)) also tested interference between two linguistic tasks. In their study, participants were presented with pictures to name. Either 50 ms, 300 ms, or 1800 ms after the picture was presented, participants heard one of two syllables, 'aak' or 'eek', and needed to press a button corresponding to each syllable. Participants were told to always name the picture before pressing the button. In a separate condition, the syllables were replaced with tones: participants still named the pictures but heard concurrent tones at the different time points and pressed buttons corresponding to the tones. The authors found that, in general, people were slower to name the pictures when they concurrently classified syllables than when they concurrently classified tones. Moreover, the more the picture and the syllable overlapped in time, the longer naming took: if the syllable was presented almost simultaneously with the picture (at 50 ms), naming took longer than when the syllable was presented just after the picture (at 300 ms), and naming while listening to syllables always took longer than naming while listening to tones. This pattern suggests a general dual-task cost for producing a word, since naming in the tone condition took longer than naming with no sound presented at all. On top of this, there is an additional linguistic interference effect, as naming took longer when processing syllables (linguistic stimuli) than when processing tones (non-linguistic stimuli). The greater the overlap between the production and comprehension tasks, the slower the naming, which suggests that phonological-level access may not be the only level that is difficult to access jointly in production and comprehension.

The studies presented above investigated how specific levels of production and comprehension may overlap, using precise stimulus onset asynchronies (SOAs) and manipulations. A related study by Fairs, Alday, Meyer and Hervais-Adelman (unpublished; see Fairs (2019)) instead analysed the entire time course of overlap between a production and a comprehension task, without a specific manipulation. In this MEG study, participants carried out two linguistic tasks at the same time, picture naming as the production task and syllable identification as the comprehension task, as well as each task individually (with no dual-tasking), while their brain activity was recorded. Using a cutting-edge analysis technique called time-resolved multivariate pattern analysis (MVPA), in which the brain signal is decoded into different stable cognitive states, the authors were able to observe the time course of production and comprehension processing when each task was carried out alone and when both were carried out together in a dual-task. The study found that the time course of processing in syllable identification was very similar whether syllable identification was done by itself or in tandem with picture naming. In contrast, picture naming, the production task, was strongly affected when carried out at the same time as syllable identification compared to when carried out alone (see Figure 4).

Figure 4: MVPA classification results of the picture naming task


A series of classifiers was trained and tested on the picture naming task, when carried out alone and when carried out concurrently with syllable identification (picture naming was always Task 2). Only results significantly above chance are plotted. The results form a training/testing matrix, where each point represents a time point during the trial at which the classifier was trained and then tested on different data; overall they show a temporal cognitive profile of picture naming. Picture naming carried out alone is plotted in green, picture naming in the dual-task in purple, and the overlap between the two in orange. When picture naming is carried out alone it follows a fairly uniform distribution around the diagonal. When carried out in a dual-task, there are strong differences: up to 500 ms the cognitive states are similar between picture naming alone and picture naming in a dual-task, but after 500 ms the cognitive states in dual-task picture naming are delayed (they start later in time, closer to the diagonal) and extended (they take almost twice as long). The dotted line is plotted at 459 ms, the difference in production latency between picture naming alone (859 ms) and picture naming in the dual-task (1318 ms). Adapted from Fairs (2019).

In the production task, early processes were not affected by dual-tasking, but later processes were extended in time (i.e., they took longer) compared to when picture naming was carried out alone. In addition, some production processes were delayed, as if put 'on hold', and this delay of 459 ms matched the difference in naming latency between naming a picture by itself and naming a picture during the dual-task, suggesting that the delayed cognitive processes directly caused the slower speech. While this study cannot tell us exactly which processes or representations interfered with one another, it shows that MVPA has the power to track how cognitive states change when the task changes. The assumption is that when a specific representation or process is active in the brain, the cognitive state of the brain is stable (King & Dehaene, 2014). If that stability is affected, the conclusion is that the task has affected a cognitive process; in this case, the dual-tasking scenario affected the different processes involved in picture naming and made them take longer.
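The temporal generalization logic behind this kind of MVPA (train a classifier at one time point, test it at every other; King & Dehaene, 2014) can be sketched in a few lines. The data below are simulated placeholders with a class-specific pattern that is stable over a stretch of samples, not the MEG recordings discussed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# Minimal sketch of temporal generalization MVPA (King & Dehaene, 2014):
# train a classifier on sensor patterns at each time point, test it at every
# time point, yielding a train-time x test-time accuracy matrix.

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 80, 30, 40
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)
# Inject a class-specific pattern that is stable from sample 15 to 30,
# mimicking a sustained cognitive state.
pattern = rng.normal(size=n_sensors)
X[y == 1, :, 15:30] += pattern[:, None]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    for train_idx, test_idx in cv.split(X[:, :, t_train], y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx, :, t_train],
                                                    y[train_idx])
        for t_test in range(n_times):
            scores[t_train, t_test] += clf.score(X[test_idx, :, t_test],
                                                 y[test_idx]) / 5.0

# A square of above-chance scores between samples 15 and 30 indicates one
# stable cognitive state; a thin diagonal would indicate a rapid sequence
# of distinct states.
print(scores[20, 15:30].round(2))
```

In practice, dedicated MEG/EEG toolboxes implement this analysis directly (e.g., the GeneralizingEstimator in MNE-Python), together with the preprocessing and statistics a real study requires.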

Based on the studies discussed, it appears that there are shared levels of linguistic processing between production and comprehension which can result in interference when words must be produced and comprehended concurrently. A fair body of work now suggests that there is interference when phonological representations are accessed concurrently for production and comprehension. Although convincing, this may not be enough to clearly state that representations per se are shared, and it may thus be wiser not to tease apart processes and representations. However, the MEG study by Fairs and colleagues suggests that the majority of linguistic representations are in fact shared between production and comprehension, and that what may matter for how production and comprehension cope with overlapping representations is the timing of the two processes. This means that having shared representations in production and comprehension is not detrimental in itself for the language system, but needing parallel access to representational levels can be. Indeed, some studies suggest that the linguistic system can flexibly adjust how production and comprehension are carried out in tandem to reduce interference between shared processing (Fairs et al., 2018; Paucke et al., 2015), meaning that even if word representations are shared between production and comprehension, there are ways to flexibly manage access to them such that carrying out production and comprehension jointly, as in conversation, is smooth.

Perspectives on language in interaction: how much is shared?

The research reviewed above suggests a large degree of similarity between the linguistic representations and processes of production and comprehension: we find positive evidence of overlapping brain regions, similar time courses of linguistic processing, and behavioural and neural evidence of interference when production and comprehension are carried out in tandem. Note, however, that language is not just people saying individual words or hearing isolated words out of context, but people engaged in conversation with one another. How do we know when to take turns, so that someone else's production does not interfere with our own speech planning? Do speakers and listeners really share linguistic representations, so that they can truly understand each other? What mechanisms do we use to converge on shared meaning? These seem important questions to address in order to determine how much is shared between production and comprehension in interaction. In the next paragraphs we briefly highlight the growing research on turn-taking, joint language processes, and linguistic alignment, and finally suggest one paradigm that could be useful to address these issues.

Conversations feel natural to us, but what exactly do they entail? One crucial aspect of conversation is understanding when it is our turn to talk. Conversational turn-taking is in fact a naturalistic example of the dual-tasking discussed earlier (Levinson, 2016), where the rapid transitions between turns suggest that speakers begin to plan their utterances while listening to their interlocutor. Indeed, some studies have demonstrated that speakers start planning their response as soon as the incoming turn's message can be understood, and that they look for cues to turn-completion (e.g., Barthel et al., 2017; Bögels, 2019; Bögels et al., 2018; Sjerps & Meyer, 2015), strategies which likely limit the overlap between speech comprehension and speech planning (see Levinson & Torreira, 2015). Turn-taking is also a social interaction skill; studies relevant to conversation have thus recently focused on joint action, and more work is needed to fully understand its underlying cognitive and neural mechanisms (e.g., Clark, 1996; Kuhlen & Abdel Rahman, 2017, 2021).

In fact, like many human social behaviours, conversation is a joint language process, and researchers have urged scientists to move from single-individual, single-brain studies to multi-brain studies (Hari et al., 2015; Hasson et al., 2012; Wheatley et al., 2019). Some studies have already focused on the coupling of the brains of two people speaking to each other. For example, Silbert et al. (2014) tested whether there would be overlap in the brains of different people telling a story versus listening to that story. They found a large amount of overlap between speakers' and listeners' brains, such that linguistic and non-linguistic brain areas were coupled between speaker and listener (hence between production and comprehension). Beyond providing additional evidence for shared neural substrates of production and comprehension, this suggests that representations are shared between people, although the actual extent to which representations are shared across individuals remains unclear.

Conversations can be seen as face-to-face oral interactions between two or more speakers, who develop shared assumptions and expectations in order to interpret each other's utterances and grasp intended messages (Pickering & Garrod, 2004, 2006, 2013, 2014). Hence, prediction processes and linguistic alignment are required. According to Garrod & Pickering (2004), alignment is the process by which two people align their representations, and it concerns every linguistic level, including the phonological, lexical, syntactic and semantic levels. One way to empirically test alignment with neuroscientific tools is to look at the inter-subject correlation of neural signals (see Hasson et al., 2012; Menenti et al., 2012). The rationale is that if participants are encoding and decoding the same message, then their brain activity over the course of processing stimuli, or even dialogue, should be highly correlated. Although this technique has mainly been applied to fMRI data (and therefore to the spatial localization of brain activity), it will be strongly informative when applied to temporally varying information (see Nastase et al., 2020). When do speakers and listeners activate the same linguistic representation? Reaching linguistic alignment is probably constrained or facilitated by prediction processes. Prediction is the process by which one individual anticipates what the other speaker is going to say next (Pickering & Garrod, 2013). An influential proposal has been that prediction uses the production system, in other words, that we use our own internal representations and simulations to predict what the other speaker will say. In a recent study, Hadley et al. (2020) tested this prediction-by-simulation account by having participants record a series of questions in two sessions and then, several months later, listen to their own speech or that of similar or dissimilar participants. The task was to predict the end of the questions, either by pressing a button or by producing a spoken response. Participants' responses were quicker for productions spoken by themselves or by similar participants than for productions spoken by stylistically dissimilar participants. The authors argued that listeners are better at predicting speakers who are similar to themselves; still, much more knowledge is needed on how individual differences in internal representations affect predictive processes during interaction.
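To illustrate the inter-subject correlation logic, here is a minimal leave-one-out sketch on simulated signals; published analyses (see Nastase et al., 2020) add preprocessing, spatial localization and careful statistics.

```python
import numpy as np

# Minimal sketch of leave-one-out inter-subject correlation (ISC), one common
# way to quantify how similar neural time courses are across people processing
# the same story. Data are simulated placeholders, not real recordings.

def isc_leave_one_out(data: np.ndarray) -> np.ndarray:
    """data: (n_subjects, n_timepoints) signal from one brain region.
    Returns one ISC value per subject: the correlation between that
    subject's time course and the average of all other subjects."""
    n_subjects = data.shape[0]
    iscs = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs[s] = np.corrcoef(data[s], others)[0, 1]
    return iscs

# Toy example: a shared stimulus-driven component plus subject-specific noise.
rng = np.random.default_rng(2)
shared = rng.normal(size=500)                        # story-driven signal
data = shared + rng.normal(scale=1.0, size=(10, 500))
print(isc_leave_one_out(data).round(2))              # reliably positive ISC
```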

One remaining problem in this field concerns the paradigms used: it is still difficult to combine a naturalistic, ecologically valid conversational paradigm with adequate experimental control over the production/comprehension link. With the aim of going beyond classical picture naming paradigms for studying production, and of approximating the link between comprehension and production, Fargier & Laganaro (2017) and Fargier, Montant & Strijkers (in prep.) used inferential naming tasks. In these paradigms, a participant must understand a definition in order to retrieve the target word from long-term memory and produce it. This involves predicting upcoming information as the definition unfolds in order to constrain target word retrieval, just as, in conversation, one would infer the end of a speaker’s message. In the most recent study, we showed that participants can use definitions to retrieve both concrete and abstract words from memory, but that production was faster and more accurate for concrete words than for abstract words, presumably because abstract words lack the grounding in the external world upon which retrieval can rely (Fargier et al., in prep.). One difficulty is that people may agree more with one another when defining concrete words like “bottle” than when defining abstract words like “democracy” (Borghi et al., 2017). A further step would be to study how much individual differences and common ground actually constrain conversation, something that can be approximated, yet controlled, with inferential naming tasks.
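As an illustration of how such inferential naming data can be scored, the short sketch below computes accuracy and mean naming latency per word type; the trials, field names and latency values are invented for illustration, not data from Fargier & Laganaro (2017) or Fargier et al. (in prep.).

# Each trial records the target word, its concreteness, whether the spoken
# response matched the target, and the naming latency (ms from definition
# offset to speech onset, e.g. measured with a voice key). Invented values.
trials = [
    {"target": "bottle",    "type": "concrete", "correct": True,  "rt_ms": 780},
    {"target": "hammer",    "type": "concrete", "correct": True,  "rt_ms": 812},
    {"target": "democracy", "type": "abstract", "correct": False, "rt_ms": 1240},
    {"target": "justice",   "type": "abstract", "correct": True,  "rt_ms": 1105},
]

for word_type in ("concrete", "abstract"):
    subset = [t for t in trials if t["type"] == word_type]
    accuracy = sum(t["correct"] for t in subset) / len(subset)
    # Naming latencies are conventionally analysed on correct trials only.
    rts = [t["rt_ms"] for t in subset if t["correct"]]
    mean_rt = sum(rts) / len(rts)
    print(f"{word_type}: accuracy = {accuracy:.2f}, mean RT = {mean_rt:.0f} ms")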

Conclusion

In this article we have reviewed evidence from psycholinguistic and neuroimaging studies for shared representations and processes between production and comprehension. We have also pointed to issues that still need to be addressed in order to characterize and model the linguistic, cognitive and neural mechanisms that underpin conversation. In particular, we highlighted future lines of research: developing new paradigms that retain experimental control while increasing ecological validity, tackling the social dimension of conversation, and taking into account the individual differences that may constrain shared representations and shared assumptions.


Bibliography

Barthel, M., Meyer, A. S., & Levinson, S. C. (2017) Next Speakers Plan Their Turn Early and Speak after Turn-Final “Go-Signals”, Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00393

Bergen, B., Medeiros-Ward, N., Wheeler, K., Drews, F., & Strayer, D. (2013) The crosstalk hypothesis: Why language interferes with driving, Journal of Experimental Psychology: General, 142(1), 119–130. https://doi.org/10.1037/a0028428

Bögels, S. (2019) Neural correlates of turn-taking in the wild: Response planning starts early in free interviews [Preprint], PsyArXiv. https://doi.org/10.31234/osf.io/duq4t

Bögels, S., Casillas, M., & Levinson, S. C. (2018) Planning versus comprehension in turn-taking: Fast responders show reduced anticipatory processing of the question, Neuropsychologia, 109, 295–310. https://doi.org/10.1016/j.neuropsychologia.2017.12.028

Borghi, A. M., Binkofski, F., Castelfranchi, C., Cimatti, F., Scorolli, C., & Tummolini, L. (2017) The challenge of abstract concepts, Psychological Bulletin, 143(3), 263.

Braitenberg, V. (1978) Cell assemblies in the cerebral cortex, In Theoretical approaches to complex systems, Springer, 171–188.

Bürki, A., Cheneval, P. P., & Laganaro, M. (2015) Do speakers have access to a mental syllabary? ERP comparison of high frequency and novel syllable production, Brain and Language, 150, 90–102. https://doi.org/10.1016/j.bandl.2015.08.006

Bürki, A., Elbuy, S., Madec, S., & Vasishth, S. (2020) What did we learn from forty years of research on semantic interference? A Bayesian meta-analysis, Journal of Memory and Language, 114, 104125. https://doi.org/10.1016/j.jml.2020.104125

Chen, L., Lambon Ralph, M. A., & Rogers, T. T. (2017) A unified model of human semantic knowledge and its disorders, Nature Human Behaviour, 1(3). https://doi.org/10.1038/s41562-016-0039

Clark, H. H. (1996) Using language, Cambridge University Press.

Davis, T., & Poldrack, R. A. (2013) Measuring neural representations with fMRI: practices and pitfalls, Annals of the New York Academy of Sciences, 1296(1), 108–134.

de Vignemont, F., & Haggard, P. (2008) Action observation and execution: What is shared?, Social Neuroscience, 3(3–4), 421–433. https://doi.org/10.1080/17470910802045109

de Zubicaray, G., & Piai, V. (2019) Investigating the spatial and temporal components of speech production.

Dell, G. S. (1984) Representation of Serial Order in Speech: Evidence From the Repeated Phoneme Effect in Speech Errors, Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(2), 222–233.

Dell, G. S., & Chang, F. (2014) The P-chain: Relating sentence production and its disorders to comprehension and acquisition, Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1634), 20120394. https://doi.org/10.1098/rstb.2012.0394

Dubarry, A.-S., Llorens, A., Trébuchon, A., Carron, R., Liégeois-Chauvel, C., Bénar, C.‑G., & Alario, F.‑X. (2017) Estimating Parallel Processing in a Language Task Using Single-Trial Intracerebral Electroencephalography, Psychological Science, 28(4), 414–426. https://doi.org/10.1177/0956797616681296

Dufour, S., Brunellière, A., & Frauenfelder, U. H. (2013) Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition With Event-Related Potentials, Cognitive Science, 37(3), 489–507. https://doi.org/10.1111/cogs.12015

Fairs, A. (2019) Linguistic dual-tasking: Understanding temporal overlap between production and comprehension [PhD thesis].

Fairs, A., Bögels, S., & Meyer, A. S. (2018) Dual-tasking with simple linguistic tasks: Evidence for serial processing, Acta Psychologica, 191, 131–148.

Fairs, A., Dmitrieva, X., Chanoine, V., Morillon, B., Michelas, A., Dufour, S., Pulvermüller, F., & Strijkers, K. (2020) Does the brain recruit the same word representations across language production and perception? A Registered Report MEG study, Cortex (Stage 1 Acceptance). https://osf.io/yaqdp/

Fairs, A., Michelas, A., Dufour, S., & Strijkers, K. (2021) The Same Ultra-Rapid Parallel Brain Dynamics Underpin the Production and Perception of Speech, Cerebral Cortex Communications, 2(3). https://doi.org/10.1093/texcom/tgab040

Fargier, R., & Laganaro, M. (2016) Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning, PLOS ONE, 11(12), e0168358. https://doi.org/10.1371/journal.pone.0168358

Fargier, R., & Laganaro, M. (2017) Spatio-temporal Dynamics of Referential and Inferential Naming: Different Brain and Cognitive Operations to Lexical Selection, Brain Topography, 30(2), 182–197. https://doi.org/10.1007/s10548-016-0504-4

Fargier, R., & Laganaro, M. (2019) Interference in speaking while hearing and vice versa, Scientific Reports, 9(1), 5375. https://doi.org/10.1038/s41598-019-41752-7

Fargier, R., Montant, M., & Strijkers, K. (in prep.) The production of abstract words with inferential naming tasks.

Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009) Structural integration in language and music: Evidence for a shared system, Memory & Cognition, 37(1), 1–9.

Feng, C., Damian, M. F., & Qu, Q. (2021) Parallel Processing of Semantics and Phonology in Spoken Production: Evidence from Blocked Cyclic Picture Naming and EEG, Journal of Cognitive Neuroscience, 1–14. https://doi.org/10.1162/jocn_a_01675

Fournet, M., Pernon, M., Catalano Chiuvé, S., Lopez, U., & Laganaro, M. (2021) Attention in post-lexical processes of utterance production: Dual-task cost in younger and older adults, Quarterly Journal of Experimental Psychology, 17470218211034130.

Garrod, S., & Pickering, M. J. (2004) Why is conversation so easy?, Trends in Cognitive Sciences, 8(1), 8–11. https://doi.org/10.1016/j.tics.2003.10.016

Grabski, K., & Sato, M. (2020) Adaptive phonemic coding in the listening and speaking brain, Neuropsychologia, 136, 107267. https://doi.org/10.1016/j.neuropsychologia.2019.107267

Hadley, L. V., Fisher, N. K., & Pickering, M. J. (2020) Listeners are better at predicting speakers similar to themselves, Acta Psychologica, 208, 103094.

Hagoort, P. (2021) Carving the neurobiology of language at its joints: The quest for natural kinds. Distinguished career award lecture, Annual conference of the Society for the Neurobiology of Language.

Hagoort, P., & Indefrey, P. (2014) The Neurobiology of Language Beyond Single Words, Annual Review of Neuroscience, 37(1), 347–362. https://doi.org/10.1146/annurev-neuro-071013-013847

Hamilton, L. S., Oganian, Y., Hall, J., & Chang, E. F. (2021) Parallel and distributed encoding of speech across human auditory cortex, Cell, 184(18), 4626-4639.e13. https://doi.org/10.1016/j.cell.2021.07.019

Hari, R., Henriksson, L., Malinen, S., & Parkkonen, L. (2015) Centrality of Social Interaction in Human Brain Function, Neuron, 88(1), 181–193. https://doi.org/10.1016/j.neuron.2015.09.022

Hasson, U., Ghazanfar, A. A., Galantucci, B., Garrod, S., & Keysers, C. (2012) Brain-to-brain coupling: A mechanism for creating and sharing a social world, Trends in Cognitive Sciences, 16(2), 114–121. https://doi.org/10.1016/j.tics.2011.12.007

Hauk, O. (2016) Only time will tell – why temporal information is essential for our neuroscientific understanding of semantics, Psychonomic Bulletin & Review, 23(4), 1072–1079. https://doi.org/10.3758/s13423-015-0873-9

Hebb, D. O. (1949) The organization of behavior: A neuropsychological theory, Science Editions, New York.

Heinks‐Maldonado, T. H., Mathalon, D. H., Gray, M., & Ford, J. M. (2005) Fine-tuning of auditory cortex during speech production, Psychophysiology, 42(2), 180–190. https://doi.org/10.1111/j.1469-8986.2005.00272.x

Hickok, G., & Poeppel, D. (2004) Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language, Cognition, 92(1–2), 67–99. https://doi.org/10.1016/j.cognition.2003.10.011

Hickok, G., & Poeppel, D. (2007) The cortical organization of speech processing, Nature Reviews Neuroscience, 8(5), 393–402. https://doi.org/10.1038/nrn2113

Holcomb, P. J., & Neville, H. J. (1990) Auditory and Visual Semantic Priming in Lexical Decision: A Comparison Using Event-related Brain Potentials, Language and Cognitive Processes, 5(4), 281–312. https://doi.org/10.1080/01690969008407065

Indefrey, P. (2011) The Spatial and Temporal Signatures of Word Production Components: A Critical Update, Frontiers in Psychology, 2. https://doi.org/10.3389/fpsyg.2011.00255

Indefrey, P., & Levelt, W. J. M. (2004) The spatial and temporal signatures of word production components, Cognition, 92(1–2), 101–144. https://doi.org/10.1016/j.cognition.2002.06.001

Kemmerer, D. (2014) Word classes in the brain: Implications of linguistic typology for cognitive neuroscience, Cortex, 58, 27–51. https://doi.org/10.1016/j.cortex.2014.05.004

King, J.-R., & Dehaene, S. (2014) Characterizing the dynamics of mental representations: The temporal generalization method, Trends in Cognitive Sciences, 18(4), 203–210. https://doi.org/10.1016/j.tics.2014.01.002

Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A., & Poeppel, D. (2017) Neuroscience needs behavior: Correcting a reductionist bias, Neuron, 93(3), 480–490.

Kuhlen, A. K., & Abdel Rahman, R. (2017) Having a task partner affects lexical retrieval: Spoken word production in shared task settings, Cognition, 166, 94–106. https://doi.org/10.1016/j.cognition.2017.05.024

Kuhlen, A. K., & Abdel Rahman, R. (2021) Joint language production: An electrophysiological investigation of simulated lexical access on behalf of a task partner, Journal of Experimental Psychology: Learning, Memory, and Cognition.

Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998) An MEG Study of Picture Naming, Journal of Cognitive Neuroscience, 10(5), 553–567. https://doi.org/10.1162/089892998562960

Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999) A theory of lexical access in speech production, Behavioral and Brain Sciences, 22, 1–75.

Levelt, W. J. M., & Wheeldon, L. (1994) Do speakers have access to a mental syllabary?, Cognition, 50(1–3), 239–269. https://doi.org/10.1016/0010-0277(94)90030-2

Levinson, S. C. (2016) Turn-taking in Human Communication – Origins and Implications for Language Processing, Trends in Cognitive Sciences, 20(1), 6–14. https://doi.org/10.1016/j.tics.2015.10.010

Levinson, S. C., & Torreira, F. (2015) Timing in turn-taking and its implications for processing models of language, Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00731

Llorens, A., Trébuchon, A., Liégeois-Chauvel, C., & Alario, F.-X. (2011) Intra-Cranial Recordings of Brain Activity During Language Production, Frontiers in Psychology, 2. https://doi.org/10.3389/fpsyg.2011.00375

MacGregor, L. J., Pulvermüller, F., van Casteren, M., & Shtyrov, Y. (2012) Ultra-rapid access to words in the brain, Nature Communications, 3(1). https://doi.org/10.1038/ncomms1715

Mehler, J., Morton, J., & Jusczyk, P. W. (1984) On reducing language to biology, Cognitive Neuropsychology, 1(1), 83–116.

Menenti, L., Pickering, M. J., & Garrod, S. C. (2012) Toward a neural basis of interactive alignment in conversation, Frontiers in Human Neuroscience, 6. https://doi.org/10.3389/fnhum.2012.00185

Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016) Same, different, or closely related: What is the relationship between language production and comprehension?, Journal of Memory and Language, 89, 1–7. https://doi.org/10.1016/j.jml.2016.03.002

Miozzo, M., Pulvermüller, F., & Hauk, O. (2015) Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study, Cerebral Cortex, 25(10), 3343–3355. https://doi.org/10.1093/cercor/bhu137

Nastase, S. A., Goldstein, A., & Hasson, U. (2020) Keep it real: Rethinking the primacy of experimental control in cognitive neuroscience [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/whn6d

Okada, K., & Hickok, G. (2006) Left posterior auditory-related cortices participate both in speech perception and speech production: Neural overlap revealed by fMRI, Brain and Language, 98(1), 112–117. https://doi.org/10.1016/j.bandl.2006.04.006

Pashler, H. (1994) Dual-Task Interference in Simple Tasks: Data and Theory, Psychological Bulletin, 116(2), 220–244.

Paucke, M., Oppermann, F., Koch, I., & Jescheniak, J. D. (2015) On the costs of parallel processing in dual-task performance: The case of lexical processing in word production, Journal of Experimental Psychology: Human Perception and Performance, 41(6), 1539–1552. https://doi.org/10.1037/a0039583

Pickering, M. J., & Gambi, C. (2018) Predicting while comprehending language: A theory and review, Psychological Bulletin, 144(10), 1002–1044. https://doi.org/10.1037/bul0000158

Pickering, M. J., & Garrod, S. (2004) Toward a mechanistic psychology of dialogue, Behavioral and Brain Sciences, 27(2), 169–190. https://doi.org/10.1017/S0140525X04000056

Pickering, M. J., & Garrod, S. (2006) Alignment as the Basis for Successful Communication, Research on Language and Computation, 4(2–3), 203–228. https://doi.org/10.1007/s11168-006-9004-0

Pickering, M. J., & Garrod, S. (2013) An integrated theory of language production and comprehension, Behavioral and Brain Sciences, 36(4), 329–347. https://doi.org/10.1017/S0140525X12001495

Pickering, M. J., & Garrod, S. (2014) Neural integration of language production and comprehension, Proceedings of the National Academy of Sciences, 111(43), 15291–15292. https://doi.org/10.1073/pnas.1417917111

Poeppel, D. (2012) The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language, Cognitive Neuropsychology, 29(1–2), 34–55.

Price, C. J. (2012) A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading, NeuroImage, 62(2), 816–847. https://doi.org/10.1016/j.neuroimage.2012.04.062

Pulvermüller, F. (1999) Words in the brain’s language, Behavioral and Brain Sciences, 22, 253–279.

Pulvermüller, F. (2018) Neural reuse of action perception circuits for language, concepts and communication, Progress in Neurobiology, 160, 1–44. https://doi.org/10.1016/j.pneurobio.2017.07.001

Pulvermüller, F., & Fadiga, L. (2010) Active perception: Sensorimotor circuits as a cortical basis for language, Nature Reviews Neuroscience, 11(5), 351–360. https://doi.org/10.1038/nrn2811

Pulvermüller, F., Shtyrov, Y., & Ilmoniemi, R. (2005) Brain signatures of meaning access in action word recognition, Journal of Cognitive Neuroscience, 17(6), 884–892.

Pulvermüller, F., Shtyrov, Y., Ilmoniemi, R. J., & Marslen-Wilson, W. D. (2006) Tracking speech comprehension in space and time, NeuroImage, 31(3), 1297–1305. https://doi.org/10.1016/j.neuroimage.2006.01.030

Riès, S. K., Dhillon, R. K., Clarke, A., King-Stephens, D., Laxer, K. D., Weber, P. B., Kuperman, R. A., Auguste, K. I., Brunner, P., Schalk, G., et al. (2017) Spatiotemporal dynamics of word retrieval in speech production revealed by cortical high-frequency band activity, Proceedings of the National Academy of Sciences, 114(23), E4530–E4538.

Sahin, N. T., Pinker, S., Cash, S. S., Schomer, D., & Halgren, E. (2009) Sequential Processing of Lexical, Grammatical, and Phonological Information Within Broca’s Area, Science, 326(5951), 445–449. https://doi.org/10.1126/science.1174481

Salmelin, R., Hari, R., Lounasmaa, O. V., & Sams, M. (1994) Dynamics of brain activation during picture naming, Nature, 368(6470), 463–465. https://doi.org/10.1038/368463a0

Shtyrov, Y., Butorina, A., Nikolaeva, A., & Stroganova, T. (2014) Automatic ultrarapid activation and inhibition of cortical motor systems in spoken word comprehension, Proceedings of the National Academy of Sciences, 111(18), E1918–E1923. https://doi.org/10.1073/pnas.1323158111

Silbert, L. J., Honey, C. J., Simony, E., Poeppel, D., & Hasson, U. (2014) Coupled neural systems underlie the production and comprehension of naturalistic narrative speech, Proceedings of the National Academy of Sciences, 111(43), E4687–E4696. https://doi.org/10.1073/pnas.1323812111

Sjerps, M. J., & Meyer, A. S. (2015) Variation in dual-task performance reveals late initiation of speech planning in turn-taking, Cognition, 136, 304–324. https://doi.org/10.1016/j.cognition.2014.10.008

Strijkers, K. (2016a) Can hierarchical models display parallel cortical dynamics? A non-hierarchical alternative of brain language theory, Language, Cognition and Neuroscience, 31(4), 465–469. https://doi.org/10.1080/23273798.2015.1096403

Strijkers, K. (2016b) A Neural Assembly-Based View on Word Production: The Bilingual Test Case, Language Learning, 66(S2), 92–131. https://doi.org/10.1111/lang.12191

Strijkers, K., & Costa, A. (2011) Riding the Lexical Speedway: A Critical Review on the Time Course of Lexical Selection in Speech Production, Frontiers in Psychology, 2. https://doi.org/10.3389/fpsyg.2011.00356

Strijkers, K., & Costa, A. (2016) The cortical dynamics of speaking: Present shortcomings and future avenues, Language, Cognition and Neuroscience, 31(4), 484–503. https://doi.org/10.1080/23273798.2015.1120878

Strijkers, K., Costa, A., & Pulvermüller, F. (2017) The cortical dynamics of speaking: Lexical and phonological knowledge simultaneously recruit the frontal and temporal cortex within 200 ms, NeuroImage, 163, 206–219. https://doi.org/10.1016/j.neuroimage.2017.09.041

Strijkers, K., Costa, A., & Thierry, G. (2010) Tracking Lexical Access in Speech Production: Electrophysiological Correlates of Word Frequency and Cognate Effects, Cerebral Cortex, 20(4), 912–928. https://doi.org/10.1093/cercor/bhp153

Tremblay, P., & Dick, A. S. (2016) Broca and Wernicke are dead, or moving past the classic model of language neurobiology, Brain and Language, 162, 60–71. https://doi.org/10.1016/j.bandl.2016.08.004

Van Turennout, M., Hagoort, P., & Brown, C. M. (1997) Electrophysiological evidence on the time course of semantic and phonological processes in speech production, Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4), 787.

Watkins, K., & Paus, T. (2004) Modulation of Motor Excitability during Speech Perception: The Role of Broca’s Area, Journal of Cognitive Neuroscience, 16(6), 978–987. https://doi.org/10.1162/0898929041502616

Wheatley, T., Boncz, A., Toni, I., & Stolk, A. (2019) Beyond the Isolated Brain: The Promise and Challenge of Interacting Minds, Neuron, 103(2), 186–188. https://doi.org/10.1016/j.neuron.2019.05.009

Winsler, K., Midgley, K. J., Grainger, J., & Holcomb, P. J. (2018) An electrophysiological megastudy of spoken word recognition, Language, Cognition and Neuroscience, 33(8), 1063–1082. https://doi.org/10.1080/23273798.2018.1455985


List of illustrations

Title Figure 1: A unified view of comprehension and production processes
Caption The figure illustrates similarities and differences between production processes (left) and comprehension processes (right). Similar stages are required in both language processes, including semantic (blue square), lexical (pink square), phonological (green square) and monitoring (orange square) stages. Production and comprehension differ in that a soundwave is the output of articulation in production (left panel) but the input in comprehension. The flow of information is assumed to be reversed between production and comprehension, with the main flow going from semantic to phonological in production and from phonological to semantic in comprehension (dark blue arrows). Cascading and parallel processes are indicated by concurrent light blue arrows. Middle panel: theoretical models make different assumptions (e.g. separation vs. integration) about the degree of overlap in the processes and representations involved in production and comprehension. Different psycholinguistic and neural predictions can be formulated to characterize shared vs. different word representations and processes.
URL http://journals.openedition.org/tipa/docannexe/image/4879/img-1.jpg
File image/jpeg, 134k
Title Figure 2: Schematic representation of the rationale of Fairs et al. (2021)
Caption The figure displays the experimental design with the language production (top panel) and language comprehension (bottom panel) tasks. In production, participants overtly name pictures, whereas in comprehension participants listen to spoken words and press a response button when a spoken word belongs to the semantic category “food”. In both tasks the same psycholinguistic manipulations are used (high vs. low word frequency, HWF/LWF; high vs. low phonological frequency, HPF/LPF). In the middle panel, the predicted temporal signatures are schematically depicted: according to sequential models, lexico-semantic information (yellow) should be available well before phonological-phonetic information (green) in language production, and vice versa in language perception. According to parallel models, lexico-semantic and phonological-phonetic word knowledge should rapidly activate simultaneously. Taken from Fairs et al. (2021).
URL http://journals.openedition.org/tipa/docannexe/image/4879/img-2.jpg
File image/jpeg, 189k
Title Figure 3: Dual-task interference in speaking while listening
Caption A: Production latencies in the single task (picture naming), passive dual-task (picture naming while passively hearing syllables) and active dual-task (picture naming while actively listening to syllables). B: Neural correlates of the single task (top panel), passive dual-task (middle panel) and active dual-task (bottom panel), corresponding to the grand-average event-related potentials. Neural events correspond to color-coded stable electrophysiological configurations at the scalp, whose topographic patterns are displayed in the lowest panel. Interference on late processes is reflected in the increased duration of topographic pattern 12 (in pink) and is illustrated by dotted lines. Adapted from Fargier & Laganaro (2016).
URL http://journals.openedition.org/tipa/docannexe/image/4879/img-3.png
File image/png, 336k
Title Figure 4: MVPA classification results of the picture naming task
Caption A series of classifiers was trained and tested on the picture naming task, carried out either alone or concurrently with syllable identification (picture naming was always Task 2). Only significantly above-chance results are plotted. The results form a training/testing matrix, where each point represents a pair of time points at which the classifier was trained and then tested on different data; the matrix as a whole depicts the temporal cognitive profile of picture naming. Picture naming carried out alone is plotted in green, picture naming in the dual-task in purple, and the overlap between the two in orange. When picture naming is carried out alone, the results follow a fairly uniform distribution around the diagonal. When it is carried out in a dual-task, however, there are strong differences: up to 500 ms the cognitive states are similar across the two conditions, but after 500 ms the cognitive states in dual-task picture naming are delayed (they start later in time, closer to the diagonal) and extended (they last almost twice as long). The dotted line is plotted at 459 ms, the difference in production latency between picture naming alone (859 ms) and picture naming in the dual-task (1318 ms). Adapted from Fairs (2019).
URL http://journals.openedition.org/tipa/docannexe/image/4879/img-4.png
File image/png, 298k

How to cite this article

Electronic reference

Amie Fairs, Raphaël Fargier and Kristof Strijkers, “Shared or different: How linked are word production and comprehension?”, TIPA. Travaux interdisciplinaires sur la parole et le langage [Online], 38 | 2022, published online 27 January 2023, accessed 11 November 2025. URL: http://journals.openedition.org/tipa/4879; DOI: https://doi.org/10.4000/tipa.4879


Authors

Amie Fairs

Aix-Marseille University & CNRS, LPL, 13100 Aix-en-Provence, France
amiefairs@gmail.com

Raphaël Fargier

Aix-Marseille University & CNRS, LPL, 13100 Aix-en-Provence, France
Department of Special Needs Education, University of Oslo, Norway
rfargier1@gmail.com

Kristof Strijkers

Aix-Marseille University & CNRS, LPL, 13100 Aix-en-Provence, France
Kristof.strijkers@univ-amu.fr


Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported appendix files) may be subject to specific usage permissions.
