
What happens to semantics and interaction in a software’s automatic transcriptions?

A praxeological gaze on speech-to-text
Marine Kneubühler

Abstract

This article examines the issues that transcriptions raise for scientific knowledge by focusing on automatic transcription software known as speech-to-text. It presents the general functioning and uses of speech-to-text systems, then turns to the evaluation of a specific piece of software carried out from a sociologist’s standpoint. This evaluation opened the software’s black box by qualitatively determining the sources of its systematic errors. While the software undeniably provides a reliable semantic environment, it runs into two types of problems of distinct natures that are conflated in the computational literature: those generated by the limits of the software’s computer models, which can be resolved by improving the system’s models, and those related to ordinary conversation, praxeological in nature, which escape the machine. A detailed comparison with conversation-analytic transcriptions shows how the software makes interaction disappear from its transcriptions.


Full text

Introduction

  • 1 This research, entitled Scientific Expertise and Media Discourse, was funded by the Initiative for (...)

This article examines the issues related to transcriptions by focusing on speech-to-text software, designed to convert speech into text in the automatic processing of oral sources. It draws on interdisciplinary research conducted in French-speaking Switzerland in collaboration with Radio Télévision Suisse (RTS), Avis d'Experts and the newspaper Le Temps.1 The transformation of spoken language and its accompanying phenomena into written form is a fundamental issue in the social sciences. For this reason, the formatting of in situ observations, interviews or even media documents is a necessary step in any field investigation. The choices made in this formatting partly determine how the data are constituted and analyzed, and how knowledge is elaborated.

  • 2 All translations from French are mine.

Writing cannot be reduced to a simple tool for the literal reproduction of enunciation; like all tools, it is at once a mediation, and thus a translation (Latour, 2005), but also a language practice, a means of communication and an intellectual technology that acts on social uses, creativity, scientific inventions and thought (Goody, 1979).2 The transcription of the spoken word is therefore never a trivial act. It is no coincidence that transcription has become such a central issue in the field of sociolinguistics; it has become even more so since the development of recording media, which allow for increasingly careful transcription and a systematic return to the sources, thus thoroughly modifying research practice (Mondada, 2007).

The issues raised by transcription in the social sciences are diverse and often interrelated, although treated separately:

  • methodological and epistemological when it comes to thinking about how our relationship to recorded sources transforms “the observables” and “the phenomena available for linguistic analysis” (Mondada, 2007, p. 145);

  • political when the debate is about how interviewees appear in the texts in order to avoid a “stigmatization effect” (Bourdieu, 1996);

  • technical if we think of the plethora of computer tools developed in recent years to facilitate and refine transcription work in the human and social sciences (Rioufreyt, 2018; Tancoigne et al., 2020).

Given all of these stakes, the addition of an extra mediation in the work of transcription requires rigorous attention to the choices that techniques prefigure in the production of data. All research implies choices. There is nothing technophobic about this statement. It rather invites us to examine the consequences of these choices and thus to become “aware of the limits and possibilities inherent in the various techniques of the intellect” (Goody, 1979, p. 57).

First, I will present the general functioning of speech-to-text and its uses. Second, I will describe the preliminary evaluation work carried out in the framework of the above-mentioned research on the automatic transcriptions produced by a specific speech-to-text software, VoxSigma, offered by Vocapia Research.3 This evaluation not only established that VoxSigma provides a reliable semantic environment for its users, but also qualitatively identified some sources of systematic errors. My aim in this article is to gain a deeper understanding and a detailed analysis of what generates errors in automatic transcriptions. I did this by comparing the preliminary evaluation work both with the computational literature on the subject and with transcriptions performed manually in the manner of conversation analysis. The analysis revealed two types of problems of a distinct nature that structure the presentation of the article’s core: (1) the problems generated by the limitations of the computational models embedded in speech-to-text; (2) those related to ordinary conversation.

We will see that this synthesis recalls an opposition, formulated by Quéré (1991), between two models of communication: (1) an epistemological model, conceiving of language as a transmission of information between speakers who proceed, as machines do, by encoding and decoding in order to understand each other; (2) a praxeological model, referring to the way in which interaction is co-constituted by speakers in a situation. We will thus see that, fundamentally, apart from strictly computational issues, speech-to-text systems face an impasse regarding conversation: it is impossible for the machine to actually apprehend, and then transcribe, the interaction order co-constituted through speech exchanges. Finally, before concluding, I will add a point concerning a particularly interesting source of error, the transcription of person names, which seems to lie at the crossroads of these two types of problems.

Opening the speech-to-text black box

Speech-to-text software belongs to the field of Natural Language Processing (NLP), which encompasses all uses of a “computer to automatically process written texts and spoken productions in natural language” and lies “at the interface between linguistics and computer science” (Silberztein, 2019, p. 7). Speech-to-text systems are thus part of NLP research and often associated with other types of software, but they by no means cover all the tools that make up this field. The aim is to substitute machine labor for a human task considered “[i]nescapable but time-consuming” (Rioufreyt, 2018, p. 97). Their development is driven by a wish to generate automatic transcriptions independent of human perception of recorded sources. In this regard, the computational literature tells us that “researchers have been fascinated with the idea of machines that could hear and understand audio content just like humans do, referred to as ‘machine listening’” (Gauvain et al., 2019, p. 87). In reality, the state of the art on speech-to-text and, more generally, on NLP is still far from this ideal of complete automation.

Indeed, the development of speech-to-text involves a “training phase” for the systems on corpora that include not only written texts, usually taken from the Internet (Vasilescu et al., 2014; Roy et al., 2015; Gauvain et al., 2019), but also transcriptions and annotations of “manually” transcribed oral sources (Lamel et al., 2011; Poignant et al., 2012; Despres et al., 2013; Clavel et al., 2013; Vasilescu et al., 2014; Bredin, Roy et al., 2014; Fraga-Silva et al., 2015; Roy et al., 2015). Moreover, machines derived from an “NLP solution,” such as the chatbots employed by some businesses, involve the demanding work of “continuous supervision of the system, which is readapted according to the actual situations of use” (Esteban, 2020, p. 199). Human skills related to the understanding of language remain indispensable, both upstream and downstream of the machine's design, for its proper functioning. At the same time, significant refinements of speech-to-text are completed every year. This improvement responds to situations that require processing corpora of considerable size, notably in the media, but also phone conversations or interviews in marketing (Clavel et al., 2013), judicial work (Gauvain et al., 2019), document archive management (Alquier et al., 2017) and even human and social sciences research (Tancoigne et al., 2020).

  • 4 Latour uses the term “technical folding” (pliage technique) to refer to the mode of existence of th (...)

Insofar as the use of a speech-to-text system involves a service provided by a third party external to its contexts of use, it seems important to retrace the “networks of mediations” (Latour, 2005, p. 136) that constitute it, in particular its computer models and the model of communication (Quéré, 1991) that presides over its design. The point is to avoid leaving this software in the state of a “black box” that “can be easily forgotten” (Latour, 2005, p. 39). Indeed, once delivered for use, a speech-to-text software produces texts whose “technical folding” (pliage technique) (Latour, 2010, p. 29) tends to become opaque or even invisible to users, who may consider it a simple “intermediary” step in the inquiry.4 In fact, we will see that speech-to-text systems do modify the “elements” of language “they are supposed to carry” (Latour, 2005, p. 39), and it is possible to make these modifications visible by bringing to light the problems encountered by the software. According to Latour’s lesson, it is indeed when a technical object “breaks down” (p. 39) or poses problems that it fully appears as a “mediator.”

A praxeological gaze on speech-to-text

This article focuses on the VoxSigma transcripts that were used in the interdisciplinary research project Scientific Expertise and Media Discourse. The aim of this research was to develop a computer prototype, for journalists and the general public, capable of producing “semantic maps” connecting scientific experts to topics discussed in the media, in order to facilitate and enrich the circulation of knowledge between the University and the City in a context of misinformation and mistrust of media and scientific institutions. The prototype was designed to produce maps connecting written words from media sources, including the audio and video of the corpus provided by Avis d'Experts. These sources made access to transcripts an immediate necessity. The solution was, in a sense, ready-made, since RTS had a contract with Vocapia for archiving and, more recently, for research purposes. It was therefore possible to use the transcripts created by VoxSigma to introduce the RTS oral media sources into the development of the prototype.

A preliminary step of qualitative examination of the transcribed texts appeared essential to evaluate the automatic transcriptions and to make sure that the errors encountered would not later cause misreadings in the “semantic maps.” I was assigned to perform this evaluation by comparing the texts provided by Vocapia with the original audio and video sources. The first task was to identify systematic errors that emerged from the comparison between reading the transcripts and listening to the radio and television shows, based on a qualitative corpus of 68 broadcasts covering a wide variety of formats (interviews, reports, debates, mixed). The notion of error used during this stage corresponds to that of Vocapia, which uses a widespread metric to measure the reliability of an automatic transcriber: the Word Error Rate.5 According to this metric, an error is detected in three cases: when a word is (1) substituted for the spoken word, (2) deleted from the transcription, or (3) inserted without being spoken.6
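As a rough illustration (a minimal sketch, not Vocapia's implementation), this metric can be computed by aligning the hypothesis with the reference transcription through a word-level edit distance, then dividing the number of substitutions, deletions and insertions by the length of the reference; the sentences below are invented examples:

```python
# Minimal Word Error Rate (WER) sketch: Levenshtein alignment over words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                               # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                               # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a five-word reference: WER = 0.2
print(word_error_rate("le débat sur le climat", "le débat sur le climant"))
```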

When an error was identified, it was also a matter of qualitatively determining its source by asking, “What happens in the situation when an error is made by the software?” A list of fifteen sources of errors was created using this method, without knowledge of the computational literature on such software at the time. Despite these errors, the work of the software was evaluated as largely sufficient for this project, since the unit of analysis for the prototype is the word – the common noun specifically. It appeared that VoxSigma provided a reliable semantic environment. It is thus always possible to know what one is talking about and to identify thematic fields clearly and precisely.

Nevertheless, beyond this project, a detailed analysis of these fifteen sources of errors seemed worth developing further, in order to contribute to studies on automatic transcriptions from a sociological and conversational point of view in particular. Indeed, a thorough review of the computational literature on speech-to-text shows that the a posteriori evaluation of such software is usually performed through other software. Although the latter are also trained on manual transcriptions, they generate error reports in a probabilistic way, without being able to identify precisely the causes of these errors, which then remain in the state of hypotheses. By contrast, evaluating this speech-to-text system as a social scientist, and not as a computer scientist, made it possible to better target the causes of the problems encountered, to formulate them at a practical level thanks to contact with the original sources, and to apprehend the consequences of our conceptions of communication and language for the production of scientific knowledge.

  • 7 Vocapia has a web page giving access to the PDFs of many articles referring to the state of the art (...)

In this article, the fifteen sources of errors identified in the preliminary evaluation were first compared to the public descriptions of VoxSigma provided by Vocapia and to observations found in the computational literature on speech-to-text.7 The aim was to examine whether each source of error was considered and, if so, how it was dealt with from a computational point of view. This research provided an understanding of how computer scientists conceive of the software's work and its problems. From a sociological point of view, the sources of errors fall under two general types of problems of different natures that are not distinguished in the computational literature: (1) computational – the problems generated by the limits of the software's models; and (2) interactional – those related to ordinary conversation, which falls under a praxeological model of communication that defies computer science.

The following table summarizes the list of sources of errors drawn up in the preliminary stage (column 1) and shows what type of problem they are associated with from a computational point of view (column 2) and from a sociological point of view (column 3). I now propose to examine these two types of problems in detail, successively, and to open a discussion about what their perception implies from a computer science point of view. For the social scientist, rows 1-10 of the table refer to Type 1, of a computational nature, and rows 11-14 refer to Type 2, of an interactional nature. For the analysis of this second type, a detailed comparison with transcripts in the manner of conversation analysis is provided. Row 15 is hybrid and is discussed separately before the conclusion.

Table 1: List of the sources of errors

| # | Qualitative determination of the source of the error | Type of problem (computational point of view) | Type of problem (sociological point of view) |
|---|---|---|---|
| 1 | Specific vocabulary (technical, regional, word from another language) | Dictionary | Type 1: Computational |
| 2 | Acronym | Dictionary | Type 1: Computational |
| 3 | Number | Dictionary | Type 1: Computational |
| 4 | Language change | Language recognition | Type 1: Computational |
| 5 | Translation | Language recognition / Signal quality | Type 1: Computational |
| 6 | Environmental condition (wind, rain, background noise) | Signal quality | Type 1: Computational |
| 7 | Technical management (speaking too softly, too loudly, too close or too far from the microphone) | Signal quality | Type 1: Computational |
| 8 | Non-verbal sound sequence (music, laughter) | Acoustic event | Type 1: Computational |
| 9 | Pronunciation (stammering, hesitation, curtailing of a word, fluctuating flow) | Way of speaking | Type 1: Computational |
| 10 | Sentence utterance dissociated from standard writing (repetition of a word, grammatical error) | Way of speaking | Type 1: Computational |
| 11 | Enumerations | — | Type 2: Interactional |
| 12 | Overlapping speech | Signal quality | Type 2: Interactional |
| 13 | Speaker interruption | Signal quality | Type 2: Interactional |
| 14 | Change of speaker (rapid alternating turns at talk or frequent change of speakers) | Signal quality / Identification and labeling of speakers | Type 2: Interactional |
| 15 | Person names | Dictionary / Way of speaking / Identification and labeling of speakers | Hybrid |

List of the sources of errors determined qualitatively, associated with the types of problems for the software from a computational and a sociological point of view

Type 1 problems related to the limitations of computational models

The first type of problem gathers the sources of errors related to the models implemented in the Automatic Speech Recognition (ASR) system, also called speech-to-text conversion, defined as the “[p]rocess by which a computer convert[s] a speech signal into a sequence of words”.8 Usually, “[a]ll speech-to-text systems rely on at least two models: an acoustic model and a language model. In addition large vocabulary systems use a pronunciation model”.9 More precisely, from a “speech signal,” the acoustic model allows the system to associate the captured language information with phones. The language model calculates the probability and frequency of the word sequences to be produced according to the syntactic and semantic constraints of a language. In principle, a speech-to-text system includes, in addition to these two models, a lexical or “pronunciation dictionary,” which contains all the words known to the system, each associated with one or several possible pronunciations. This first type therefore gathers the sources of errors that stem from the limitations of these computer models.
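In the standard formulation found in the ASR literature (a general schema, not a description of VoxSigma's internals), these components combine in a single decision rule: the system searches for the word sequence that maximizes the product of the acoustic-model and language-model scores, with the pronunciation dictionary delimiting the search space:

```latex
\hat{W} = \operatorname*{arg\,max}_{W \in \mathcal{L}^{*}} P(W \mid X)
        = \operatorname*{arg\,max}_{W \in \mathcal{L}^{*}}
          \underbrace{p(X \mid W)}_{\text{acoustic model}}
          \,\underbrace{P(W)}_{\text{language model}}
```

where $X$ is the speech signal, $W$ a candidate word sequence, and $\mathcal{L}$ the set of words in the pronunciation dictionary.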

  • 10 The T refers to the table, and the number refers to the source line of the error addressed in the d (...)

Words that are not covered by the pronunciation dictionary are logically poorly transcribed. In the corpus that I evaluated, these are unusual terms, such as archaic ones (“jarnidieu”), technical terms, such as “hypominéralisation” (hypomineralization), regional terms, such as the name of a lake (“Lac Léman”), and terms coming from a language other than French, such as “fake news” (T1).10 In these cases, the speech-to-text system makes probable suggestions based on pronunciation to complete the sentences. To solve these vocabulary problems, it is sufficient to add the missing words to the system’s dictionary. Speech-to-text also encounters difficulties with language elements that are not words strictly speaking, such as acronyms (T2), for instance “COP21,” and numbers (T3), especially large numbers, which are rendered as digits rather than spelled out. Thus, “1200” will be transcribed as “1000 200.”
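The following toy sketch suggests why such out-of-vocabulary words are poorly transcribed; the lexicon and pseudo-phonetic strings are invented for the example and bear no relation to Vocapia's actual dictionary. A decoder constrained to a dictionary can only output known words, so unfamiliar phones are forced onto the closest-sounding entries:

```python
# Toy out-of-vocabulary (OOV) illustration: the decoder maps input phones to
# the most similar pronunciation in its dictionary, never to an unknown word.
from difflib import SequenceMatcher

LEXICON = {                  # word -> pseudo-phonetic transcription (invented)
    "faits": "f e",
    "niouse": "n j u z",     # hypothetical entry
    "nouvelles": "n u v e l",
    "lac": "l a k",
}

def decode_word(phones: str) -> str:
    """Return the in-vocabulary word whose pronunciation best matches."""
    return max(LEXICON, key=lambda w: SequenceMatcher(None, LEXICON[w], phones).ratio())

# The phones of the English word "fake" have no entry, so a known French
# word is substituted instead.
print(decode_word("f e i k"))
```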

Moreover, these models are always specific to a given language. There is no universal software. In order for a word from another language to be recognized, it must necessarily be integrated into the dictionary of the language with which the speech-to-text program works. With VoxSigma, it is possible to specify the language before starting the transcriptions. In case it is not known, VoxSigma has an automatic spoken language recognition component that is able to identify forty different languages from a speech signal.11 However, once the system is launched in a language, it does not backtrack. Until recently, researchers assumed that an audio document contained only one language (Gauvain et al., 2019, p. 82), which is far from always being the case.
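A minimal sketch of this document-level behavior, assuming (as described above) that the language is decided once from the opening of the signal and never revised; the scoring function is a random stand-in for a real acoustic language-ID model:

```python
# Document-level spoken language identification (toy version).
import random

LANGUAGES = ["fr", "de", "it", "en"]

def lid_score(segment: str, lang: str) -> float:
    """Stand-in for an acoustic language-ID score (deterministic pseudo-random)."""
    return random.Random(f"{segment}:{lang}").random()

def identify_language(segments: list[str]) -> str:
    # Only the opening segments are examined; the winner is then fixed for
    # the whole document, so a later code-switch (T4) goes undetected.
    totals = {lang: sum(lid_score(s, lang) for s in segments[:3]) for lang in LANGUAGES}
    return max(totals, key=totals.get)

doc = ["bonsoir à tous", "le climat", "the physics of heaven", "la COP"]
print(identify_language(doc))  # one language is chosen for the entire document
```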

In the literature, the matter is now discussed in terms of “Code-switching in Multilingual Communities” (Barras et al., 2020, p. 1). Cases of code-switching are common and can be inconspicuous when they appear in the middle of a sentence. In the evaluated corpus, the mere mention of the title of a book in its original language is a code-switching situation for the system. For instance, The Physics of Heaven was transcribed as “les ex-à 20” (the ex-to 20) (T4). Multilingual contexts are particularly problematic, as entire sequences may be uttered in languages or dialects different from the main language (Gelly et al., 2016).

As Switzerland is a multilingual country, the evaluated corpus was likely to pose this kind of problem to the software. Nevertheless, it consists of programs broadcast on the main national public channels. Utterances in languages other than French are translated in one way or another, most often vocally, which is essential for radio. On the soundtrack to be transcribed, the beginnings of sentences are enunciated in another language (national or foreign). The software then does not recognize the change in progress and makes erroneous proposals in French. Next, a voice in French is superimposed on the principal voice. In principle, the speech-to-text system latches onto the French voice-over, which produces awkward sentences because of the problematic propositions at the beginning (T5).

The problem of translation does not seem to have been identified yet in the literature. This is because it is conflated with another problem related to the quality of the recordings: with voice-over translation, the software must transcribe a blurred audio signal, and the quality of the transcriptions depends largely on the clarity of this signal. The problem of the recording conditions of the collected oral sources is not specific to speech-to-text; it concerns any transcription-support software (Rioufreyt, 2018, p. 103). Thus, there are specialized tools that can process the quality of the speech signal (p. 103), but speech-to-text does not have such tools built into its system and must therefore be combined with other software.

In the computational literature, the question of sources with a lot of background noise disturbing the reception of the audio signal does arise. VoxSigma does indeed multiply errors in outdoor reports with wind or rain noise (T6). The same is true when a person speaks very softly, too loudly, or too close to or too far from the microphone (T7). The literature also mentions “acoustic events” that should be identifiable; in the case of terrorism data, these are sounds of “explosions, shootings (gunshots and or machine guns)” (Gauvain et al., 2019, p. 88). VoxSigma is restricted to speech and does not label other types of acoustic events. Laughter or music are treated as pauses, indicated by blanks left in the transcription, which do not allow readers to know why the speech stopped (T8). In the case of songs, the software may capture words on the fly and place them in the middle of the pauses. Again, this point is made in the article on the challenges of terrorist propaganda data, where the “speech partitioning is perturbed by the strong presence of chanting and preaching that can easily be mistaken for speech” (p. 86).

This last contribution is particularly interesting in that it is based on data that put speech-to-text to the test. Such data are not often used to test speech-to-text, even though they shed light on the software’s weaknesses. Media sources are usually privileged, as their sound is the subject of substantial production work and audio quality problems are usually minor compared to other types of sources. Nevertheless, from a qualitative point of view, these elements are also found in media productions. When present, they cause difficulty for the software. According to Vocapia, there are always errors in speech recognition: “the speech recognition accuracy varies greatly upon a large number of factors, including type of speech (from prepared to spontaneous speech and conversational speech) and the noise level”.12 The dependence on the speaker is also fundamental, as the software is sensitive not only to noises but also to ways of speaking, including vocal and discursive idiosyncrasies that deviate too much from a standardized version of a language. The most frequent errors encountered in the evaluation of VoxSigma are precisely such vocal singularities (T9-10).

These singularities have been well documented by Rioufreyt:

more or less pronounced local accents, variation in voice modulation (stammering voice of some elderly people, enunciation in critical affective situations: crying, anguish, stupor, etc.), diction problems (palatal deformation, stuttering, lisping), etc.

We can also add people who “tend to chew their words” (2018, p. 102). In the case of speech-to-text in particular, grammatical errors in French are also a problem. In the evaluated corpus, accents (which are numerous in Switzerland) did not systematically multiply errors; for people with a strong accent who articulate well and speak slowly, without making grammatical errors, the transcription can be excellent. On the other hand, the speech of people with a weak accent who hesitate a lot will be very poorly transcribed. Some difficult-to-pronounce words that the human ear recognizes, such as “ecclésiastique” (ecclesiastic), are nonetheless transcribed incorrectly by the software.

In the computational literature, this question of how one speaks and of the paralinguistic dimensions of language is considered from the perspective of emotions but also, for example, of the voice changes that occur when one has a cold (Wagner et al., 2017). In short, one thing is certain: computational models are not suited to data with too many singularities. This is highlighted by Gauvain et al. for terrorism data that include

strong accents of non-native speakers [that] are sources of many errors. The speech also contains many hesitations and grammatical errors that do not match well with the language models. (2019, p. 86)

Crucially, speech-to-text systems struggle with anything that deviates from a standardization of language based on its “graphic reason” (Goody, 1979). Indeed, the software seeks to produce a quasi-literary text. To avoid making any mistakes, it would need a speaker who would literally speak like a text.13 The anchor of a TV or radio newscast is therefore the prototype of the ideal speaker for VoxSigma:14 they read their teleprompter with the minimum of accent and emotion possible and in a clean sound environment. In short, all phenomena to be transcribed that deviate from a speech prepared in advance and read aloud – any form of text-to-speech – challenge speech-to-text.

Type 2 problems related to impasses of computer science regarding interaction phenomena

Praxeological impasses of the text-to-speech-to-text model

The second type of problem concerns the difficulty speech-to-text has in identifying and managing interactions. Indeed, it becomes very imprecise when it has to deal with overlapping speech (T12), interruptions (T13), rapidly alternating turns at talk and frequent changes of speakers (T14). This concerns all discursive phenomena whose organization is close to that of ordinary conversation, which the machine tends to treat as noise. If the news anchor is the ideal type, it is not only because they read a written speech, but also because they are the only one speaking.

Computer scientists acknowledge these difficulties. Vocapia warns its clients that the results obtained will be much less accurate for sources that contain “very casual conversations”.15 Lamel et al. also indicate that comparisons of the word error rates produced by the software between the years 2010 and 2011 are relative, because recent corpora “contain a much larger proportion of conversational speech” (2011, p. 128). Only one text in the computational literature deals exclusively with conversational data – in this case, telephone exchanges in a marketing context (Clavel et al., 2013). The authors offer the most extensive development I have read on the “spontaneous” and “interactive” dimensions of conversation and the problems they pose for software. Significantly, they focus on the “constant risk of deterioration in the quality of the message elaboration, including pronunciation and syntax” (p. 5). This “risk of deterioration” caused by conversation is categorized as the same type of problem as environmental noise. The computational literature does not, then, single out the problems that specifically affect interaction. It is therefore appropriate to show how these differ fundamentally from Type 1 problems.

To show how VoxSigma handles interactions, I selected four excerpts from the media sources of the original corpus, which I transcribed manually for comparison with the automatic transcriptions. These excerpts were chosen because of the gaps they produced in my perception between reading the texts and my contact with the media material. These transcriptions were made on the basis of the conventions used by conversation analysis (CA), since this approach aims to apprehend conversation as an “ordered phenomenon” (Mondada, 2008, p. 882).

Moreover, the conception of language that inhabits CA is the opposite of the one that informs the text-to-speech-to-text computer models. This computer conception of language refers to a model of communication that Quéré would qualify as “representationist” or “epistemological,” for which communication is “a matter of acquisition, transmission and processing of information, that is to say of elaboration, diffusion and reception of representations” (1991, p. 73). Following such a model, the act of communicating would be “to arouse in a recipient representations or ideas similar to those that are in the mind of the one who delivers the message” (p. 73). One easily recognizes here the workings of computer language, which functions “in terms of encoding and decoding of messages” to guarantee the “success of the communication” (p. 73), thus reflecting the logic of the automatic models. Moreover, this approach to communication echoes the language model that computer scientists project onto the course of human language exchanges. This projection can be seen in their way of apprehending conversation as a noise that deteriorates the signal of the message to be transmitted and decrypted, not only for the software but also between speakers. For computer scientists, the message is precisely the object that the machine is meant to extract and transcribe.

This “posture of objectification” (p. 74) tends to lead automatic transcriptions to be seen as having a certain independence from their sources. According to this posture, the process of conversion operated by the software would not transform the message that circulates; the message converted into text would be the same as the one spoken. The transcription is hence perceived as having a form of autonomy relative to its source. Significantly, the first files transmitted by VoxSigma did not contain links to the media sources. I had to request that each transcript include metadata referring me to its sources. This autonomization contributes to de-situating and de-temporalizing language; it performs a de-indexicalization.

  • 16 This “inseparability between primary and secondary data” (Mondada, 2007, p. 145) does not imply a r (...)

On the contrary, CA transcriptions do justice to a “praxeological dimension” of language (Mondada, 2008, p. 888) by considering the transcribed text as inseparable from its sources. This guarantees “a work done in constant reference to the recorded data” (2007, p. 145).16 For conversation analysts, the practices of analysis and transcription are equally inseparable and require “working with data of which one has in-depth knowledge” (Mondada, 2008, p. 884). The “praxeological communication model” that drives CA refers to a conception of language no longer as a message circulating from point A to point B, but rather as “a joint activity” (Quéré, 1991, p. 76) accomplished in a situation. At the transcription level, we move from an epistemological model that produces a text in “isolated and more or less idealized forms” to a praxeological model that shows “forms-in-time” and “forms for action structured by their emergence process” (Mondada, 2007, p. 145).

As a result, CA helps to “re-temporalize and re-situate language” (Mondada, 2007) by favoring transcriptions that preserve the marks of orality rather than the standardized rules of writing. CA preserves long silences, pauses in the middle of a sentence or word, intonations, overlaps and interruptions. The result is necessarily not very readable without access to the sources and without training in reading such conventions. In this respect, CA transcriptions are indeed opposed to speech-to-text transcriptions; these software programs prioritize standard grammar, which “offers more readability and allows automatic analyses based on standard orthographic conventions” (Rioufreyt, 2018, p. 109). On the other hand, as we will see, CA transcriptions make interactions if not readable, at least visible, whereas automatic transcriptions dissolve them and make them disappear.
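For readers unfamiliar with these conventions, here is a minimal invented exchange (not drawn from the corpus) rendered with some of the CA marks used in the excerpts below: “/” for rising intonation, “-” for a truncated word, “(.)” for a micro-pause, “(1.2)” for a timed pause in seconds, “::” for a prolonged sound, and aligned square brackets for overlapping talk:

```
1  PRE:  good evening/ (.) are you doing we[ll/
2  GUE:                                    [very we::ll
3        (1.2)
4  PRE:  so- (.) let's get started
```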

The anchor’s discourse limited to Type 1 problems

  • 17 “Climat: la grande hypocrisie ?” (Climate: the great hypocrisy?), show title: Infrarouge, December (...)
  • 18 For cross-references in the analyses, STT or CA designate the type of transcription and the number (...)

Among the four selected excerpts, the first three refer to different moments of the same television debate show.17 The first corresponds to the introduction of the show by the presenter, who produces a type of discourse very close to that of an anchor, which is perfectly suited to VoxSigma. Keeping its promises, the software (STT) provides a reliable general semantic environment around the climate challenge:18

Figure 1: Excerpt 1 STT

Illustration credit: Marine Kneubühler

The general form of the interaction, between the talkative presenter and the absent (and therefore mute) public whom the presenter addresses, is similar across the STT and CA renderings. Both transcripts produce a visual form in one block of text:

Figure 2: Excerpt 1 CA

Illustration credit: Marine Kneubühler

One might expect to encounter this form for a television newscast as well. In detail, it appears that the discursive activity performed in the debate format is not exactly the same – it consists not of presenting a list of topics, but a single topic with a list of guests and points of view. This reveals the problems encountered by the software with person names (T15). Here, more than one in two person names is wrong or not recognized as a name.

The other errors made by the software are exclusively of Type 1. There are language-change problems (T4) – “Swiss Youth for Climate” (CA21) becomes “Swiss Youssoupha clarinette” (Swiss Youssoupha clarinet, STT16). Some acronyms are transcribed incorrectly (T2) – “COP” (CA6) becomes “COB” (STT5). Pronunciation problems (T9) also appear, in two instances here: the very first word spoken has a strong rising intonation, is truncated in the middle, and its end is strongly emphasized (“no/- (.)tre” (o/-(.)ur), CA3); the software omits it (STT2). The number 24 after “COP” (“vingt-qua::tre” (twenty-fou::r), CA6) is pronounced with a prolonged vowel and ends with an emphasis; the software suggests “vendu 15 votre” (sold 15 your, STT5). Finally, the non-language acoustic events (T8, signature tune and applause, CA1-2) were not recorded by the software.

Smoothing of situated activities

The second excerpt comes a few seconds after this introduction. By contrast, it makes evident the erasure of interactions in the automatic transcriptions. In the STT version, this excerpt is presented in two blocks attributed to two different male speakers:

Figure 3: Excerpt 2 STT

Illustration credit: Marine Kneubühler

There are indeed two male speakers conversing at this point in the show. However, the form of the interaction does not correspond to two clearly distinguishable segments of speech. The CA transcription renders the interaction in a completely different form from the STT, even allowing us to distinguish several types of activities and sequences that play out between the two interlocutors. At the semantic level, STT1-11 refers to CA1-14 and STT12-17 to CA16-30.

Figure 4: Excerpt 2 CA

Illustration credit: Marine Kneubühler

Visually, it is very clear that the two blocks of the STT mix the two speakers’ alternating turns. First, we observe in the CA version that the presenter initiates a greeting sequence composed of two “adjacency pairs” (Sacks et al., 1974, p. 710) with a designated guest (CA2-8). He introduces this sequence by first alluding to the guest and indirectly mentioning his presence and his Nobel Prize-winning status (CA2-3). Then, he opens the pair of greetings by addressing him by name: “good evening Jacques Dubochet” (CA4). The guest completes this adjacency pair and ratifies the interpellation by answering, “well good evening” (CA5). Without a pause, the presenter produces a new adjacency pair related to the first one, this time of a question/answer type with “are you doing well/” (CA6), also ratified by the guest, who answers “very well” (CA7), thus allowing the presenter to close this sequence (“all the better,” CA8). After a pause, he can then start formulating a longer question on the debate’s topic. This interactive sequence has completely disappeared in the STT version, and the words that compose it are set out linearly within the presenter’s turn (STT3-5).

  • 19 A “multimodal transcription” (Mondada, 2007, p. 151) would show that the guest also uses his gaze a (...)

The same is true for the other sequence, which corresponds to an activity on the set, as the guest completes a new question/answer pair about the debate’s topic. For the guest, it is a matter of making the presenter understand that this is the moment to display a graph to accompany his explanation, a graph that is not yet “there” (CA18). Interestingly, this deictic, which typically indexes the situated dimension of language, is absent from the STT version (STT15). Without going into conversational detail, to which we would need to add a multimodal dimension to complete the analysis,19 we already notice that the absence of the graph that “should be there” makes the two speakers produce overlaps (CA18-19; 20-21; 22-23; 25-26) and rapid alternating turns (CA27-29) that the STT ignores. As with the greeting sequence, the software merges the captured words into a single turn (STT12-17), differentiated from the first block probably because of the length of the pause (CA15) that the guest makes before responding. The STT only recognizes that the speaker before the pause is not the same as the one who then resumes speaking.

Moment of intense debate: a simulacrum of interaction

The third excerpt is very interesting because its STT version shows rather rapidly alternating speech that visually evokes an interactive form between several speakers, lasting 1'34''. The software distinguishes nine blocks that it attributes to five different speakers, two female (FS9 and FS8) and three male (MS16, MS14 and MS3). Two of them are assigned three blocks each (MS16 and MS14).

Figure 5: Excerpt 3 STT

Illustration credit: Marine Kneubühler

The CA version captures the fact that this is a very intense moment of debate where four speakers – not five, and including only one woman – interrupt each other a lot, especially after one guest’s self-selection (CA7) to respond to another guest (CA1), who is herself interrupted by the presenter (CA2-3). In fact, the machine duplicated two speakers: PAX, identified as MS16 (STT5-6) and MS3 (STT38-40); and IAT, identified as FS9 (STT1-2) and FS8 (STT32-35). It also made another speaker, IBG, disappear by identifying him as MS16 (STT19; 27-29), the same label as PAX. IBR corresponds more or less to MS14 (STT9-10; 13-16; 22-24). The latter is an approximation because none of these nine blocks, with the exception of the two successive blocks of MS14 (STT9-10 and 13-16), actually refers to a single speaker’s turn.

Figure 6: Excerpt 3 CA

Illustration credit: Marine Kneubühler

Usually, speakers ensure that only “one party [talks] at a time” (Sacks et al., 1974, p. 729) and produce “repair mechanisms” (p. 701) when this is not the case. The conversational peculiarity of this excerpt is that the self-selection of a guest, rare in a presenter-moderated debate, is allowed by PAX: “yes Bernard Rüeger” (CA9). This allows IBR to develop the longest sequence of the excerpt without interruption (CA10-16). This exception paves the way for multiple attempts at self-selection instead of repairs.

This kind of interactional phenomenon, extreme even by conversational standards, usually causes the machine to go off the rails. The machine does show an interactive form in its transcription, but it actually generates a simulacrum of interaction. Seven blocks of the STT version merge the talk of several different speakers: STT1-3 corresponds to CA1-4 (IAT and PAX), STT5-6 to CA5-9 (PAX and IBR), and STT19 to CA17-20 (PAX, IBG, and IBR); here, the STT thus conflates three participants into a single line. STT22-24 refers to CA21-25 (IBR and PAX), STT26-30 to CA27-34 (IBR and IBG), STT31-36 to CA35-47 (all four speakers) and STT38-40 to CA49-52 (IBG and PAX). Two lines that are in the CA version (26 and 48) were skipped in the STT version altogether. Both refer to attempts at self-selection by IBG that were ignored by the other speakers, and by the presenter in particular. Finally, even within the single uninterrupted turn, the software produces a quirk and splits it in two, even though it does not include a longer pause than elsewhere (the STT12 line break refers to CA11’s second micro-pause).

The elusive interaction: a sequential problem

  • 20 "La saison de la pêche" (The Fishing Season), show title: L’oreille des Kids (The Kids’ ear), May 1 (...)

The last excerpt no longer refers to a debate but to a science show for children.20 All of the STT transcripts for this show are poor, sometimes even extremely poor. First, because many children participate, and their expressions do not fit standard speech. Second, because the adult speakers perform sequences that simulate spontaneous forms of conversation. In the excerpt in question, the two participants, the host and a female expert, produce a playful explanation of a technique for counting fish in a lake. On the semantic level, the topic of fishing is recognizable, but it is mixed with terms that have no place there, such as “police” (STT21), “vaccines” (STT27) or “Qatar” (STT31). The software, lost in the conversational transitions, makes probable proposals to fill in the blanks.

Figure 7: Excerpt 4 STT

Illustration credit: Marine Kneubühler

The CA version of this excerpt is shorter than the STT version presented and stops at STT7:

Figure 8: Excerpt 4 CA

Illustration credit: Marine Kneubühler

From this short CA excerpt, one can already clearly perceive the disappearance of the conversational form in the STT version. Yet there are no overlaps, no interruptions, and rapid alternating turns are not always the rule. Moreover, this exchange is not only very easy to understand, as it is intended for children, but also easy to transcribe, compared with Excerpt 3, which is particularly demanding for the human transcriber. The rest of the CA excerpt, in its visual form, would have been very similar, with a perfectly balanced exchange between the host and the expert. In the STT version, however, the software does not even recognize the difference between the female and the male voice and transcribes an erroneous exchange between two male speakers (STT10-32).

  • 21 VoxSigma is nevertheless considered to be efficient in transcribing “spontaneous speech” (Tancoigne (...)

This excerpt is very useful for highlighting the specificity of Type 2 problems compared to Type 1, two types that the computational literature conflates. At best, conversation is considered an amplifier of pronunciation problems, in the same way as vocal singularities. However, it is conceivable that a person could practice his or her way of speaking to correspond better to the standardized way – smoothing out an accent or pronunciation, avoiding hesitations – and thereby allow the software to transcribe it better. Excerpt 4 shows, on the other hand, that even a conversation that is clearly prepared in advance and meticulously performed, thus simulating the spontaneous dimension of the exchange, does not allow the machine to do better.21

We can therefore characterize these Type 2 problems as sequential problems. However, they concern sequences different from those computed by the language model, whose unit is the word, as the system’s use of a dictionary suggests. The kind of sequence at stake in this second type of problem is rather the one whose unit of analysis is the turn at talk in the conversation-analytic sense, which “is not a grammatical unit such as a wording or a sentence, but an interactive unit co-constructed by the conversants” (Relieu & Brock, 1995, p. 82). Thus, the semantic environment, which is the object to be transcribed for the software, is not the sequential environment that the CA transcriptions allow us to visualize. The probable word sequences computed by the speech-to-text language model do not correspond at all to the concrete interactive sequences locally performed by the speakers. On the contrary, this model makes them disappear. It thus becomes possible to grasp that the language skills of the machine are situated in a different world from that of human skills; understanding for the human does not mean “understanding” for the machine.

Indeed, from a praxeological point of view, talk produced in interaction is always designed in such a way as to “display an orientation and sensitivity” to its recipient(s) (Sacks et al., 1974, p. 727), not to a decoder. In ordinary language understanding, the interactively determined “turn-taking organization at least partially controls the understanding of utterances” (p. 728). With the allocation of the guest’s turns by the presenter within greeting and question/answer sequences, Excerpt 2 shows that what matters for the participants at this very moment is not the content of a message – which is in this instance strictly phatic and without importance for the debate’s topic – but ensuring the projectability of next turns, which are performed within the situation, in this case for the public of the show. Sacks et al. put it well: “while an addressed question requires an answer from the addressed party, it is the turn-taking system, rather than syntactic or semantic features of the ‘question,’ that requires the answer to come ‘next’” (p. 725).

This projectability is precisely what allows the construction of a turn as a relevant unit for the participants:

The main characteristic of the units used to construct a turn is thus that they allow a projection of the typical current unit (a question, an answer, an acceptance, and so on); the next recipient or speaker can thus ‘analyze’ during the production of the turn what it consists of and when its possible end will occur; the current speaker can thus deliver in advance indications of a possible speaker-change (Relieu & Brock, 1995, p. 82).

  • 22 Thus, in the rules listed by Sacks et al. (1974) several have this form: “...is not fixed but varie (...)

Projectability does not mean programmed, or even programmable, because this form of anticipation always takes place endogenously to the action. Thus, contrary to the first type, this problem cannot be solved by improving the models currently implemented in the automatic system. Indeed, interaction does not obey the epistemological logic that prevails in software design; it has to do with a completely different system, that of “turn-taking,” which is “a system for ‘sequences of talk’” (Sacks et al., 1974, p. 710); these sequences resist a priori definitions, since they are a matter of the “practical accomplishment of members” (Mondada, 2008, p. 888). Although this interactional system may be called “machinery” (Sacks et al., 1974, p. 725) because of its systematicity, the practical implementation of its constitutive rules can hardly be anticipated “exogenously” (Mondada, 2008, p. 888), as any machine programmed according to an epistemological communication model would require. In other words, the specificity of these rules lies in their possibility of being recognized in the very course of the exchange, while undergoing numerous variations according to the singularity of each situation.22 The question, then, is to what extent computer models could be designed according to a praxeological logic, capable of embracing this point of view endogenous to the interaction.

The intriguing automatic transcription of person names

With these two types of problems clearly identified, one case remains that is difficult to assign to either: the errors due to the utterance of a person’s name (T15). This problem is a priori a problem of vocabulary, and therefore partly of Type 1. Thus, famous people have an advantage (Bredin et al., 2012, p. 387) because they are likely to be referenced in pronunciation dictionaries. This is certainly the case, for example, for Jacques Dubochet, a Nobel Prize winner in chemistry whose name is transcribed correctly (see Excerpts 1 and 2). However, it can also happen that celebrities have their names misspelled.

The automatic transcription of person names is a major problem and is widely considered in the computational literature, which reports huge error rates compared to manual transcriptions, ranging from 17% errors for the latter to 75% for software (Bredin, Roy et al., 2014, p. 2; Bredin, Laurent et al., 2014, p. 1). Poignant et al. (2014) identify three likely sources of errors: pronunciation, the “detection of names close to common language words,” and the “difficulty in excerpting a person's full name when only part of it has been pronounced (e.g., only the first name)” (p. 49). The automatic transcription of person names is treated separately in the literature because it is closely related to the problem of speaker identification, which is rather a Type 2 problem. We have seen that speaker-change phenomena are part of the conversational problem of alternating turns at talk and that the software is not reliable on this point, not only for identifying the turn as a unit, but also for labeling the automatically reformatted turn, in this case according to the supposed gender of the speaker.

As Vocapia points out, labeling a voice, a process called speaker recognition, is not the same type of computer work as speech recognition, which consists in transcribing words. The term voice recognition should thus be avoided to qualify speech-to-text processing.23 Labeling is a specific task, called speaker diarization, which consists in automatically “structuring the audio stream into speaker turns, and help multimedia indexation” (Tran et al., 2011, p. 28). Within the same audio document, it thus becomes possible to locate all the places where a particular speaker intervenes. As Bredin, Roy et al. state, this task is not concerned with the real identity of a speaker, who remains anonymous, because it addresses a problem of the clustering of speech turns (2014, p. 9).
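A minimal sketch of diarization as turn clustering, under the assumptions just described: the segment embeddings below are random stand-ins for real speaker vectors, the two-cluster setting is arbitrary, and the labels are anonymous rather than identities:

```python
# Speaker diarization sketch: cluster speech segments by voice similarity.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Six speech segments: three from one "voice", three from another.
segments = np.vstack([rng.normal(0, 1, (3, 16)), rng.normal(5, 1, (3, 16))])

# Agglomerative clustering of segment embeddings; each cluster = one speaker.
labels = fcluster(linkage(segments, method="average"), t=2, criterion="maxclust")
for i, lab in enumerate(labels):
    print(f"segment {i}: SPK{lab}")  # anonymous speaker labels, not identities
```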

Speaker diarization is therefore not sufficient for speaker identification, and the literature is full of inventive and complementary solutions for this identification. Four modalities are used: speaker diarization based on the recognition of the voice; the spoken person names found in automatic transcriptions; the names written as overlays on the images; and the heads, via face recognition software (Bredin et al., 2012, pp. 386-387). Combining these four modalities is complex, as each has its own temporality (p. 390): the name spoken does not necessarily correspond to the current speaker or to the person seen on the screen. Some research thus aims to calculate the probability that a spoken person name refers to the current, the next, or the previous turn (Bredin, Laurent et al., 2014, p. 3). One can well imagine the limits of such a probability when one knows how difficult it is for the software to manage turns.

Different combinations of these modalities have been tested in the literature. The option that works best statistically is a combination of voice, written name, and face (Bredin et al., 2012). Researchers unanimously point out that the spoken name, which must rely on speech-to-text transcripts, is the least reliable modality. This solution is even systematically discarded from the combinations. Some test an alternative solution that always obtains better results, such as automatic excerpting of texts overlaid on images (Poignant et al., 2012), while others use manual transcriptions of the analyzed data in order to integrate spoken person names in combinations with the voice and written names (Bredin, Roy et al., 2014).

To overcome this problem related to the difficulties of speech-to-text with person names, Roy et al. (2015) propose instead focusing on speaker roles, such as that of a journalist, which could be inferred in automatic transcripts from the general semantic environment. They obtain better scores in terms of speaker identification compared to the method of collecting spoken names. This solution leads them to observe two interesting phenomena: vocal singularities, such as hesitations and “uh” or “hum,” which the software removes, are of great importance for speaker identification (p. 1380); and the software is designed as if only one person could talk at a time (p. 1385). They thus come close to both the problem of the standardization of language and that of conversation, but without noticing their specificities.

In summary, computer scientists are trying to solve the problem of person names and speaker identification by looking for solutions external to speech-to-text, evidently because it is not a Type 1 problem that could be solved by simply improving the computer models. We can thus postulate that this problem constitutes a sequential problem for the machine. For instance, Excerpt 1 shows that person names are often uttered during specific locally situated activities: in this case, producing an enumeration (T11) to inform a public. Yet enumerations appear within a given sequential environment. In Excerpt 1, it is a specific preliminary sequence in the context of a media and institutional exchange: introducing the speakers in a debate and indicating in what respect they are going to speak, their institutions and the point of view they are going to defend (Relieu & Brock, 1995, p. 104). Conversely, enumerations do not correspond to the composition of a standard sentence. In general, one can also assume that the use of a person name responds to sequential rules, whether producing an interpellation or referring to an absent third party; this could explain why person names are so poorly handled by the software, in addition to the Type 1 reasons mentioned by Poignant et al. (2014).

Conclusion

62Starting from the issues related to transcriptions, this article has tried to think of them as forms of translation of oral speech in order to understand the consequences of these translations for their uses. Confronting two divergent transcription techniques applied to the same sources has made it possible to highlight the essential consequences of the choices made for the production and analysis of data. Thus, it would be unreasonable to propose an interactional analysis based on automatic texts which, precisely, make interaction disappear, by redoubling the automation of language with an autonomization from its enunciation. On the other hand, these automatic transcriptions, which draw a faithful semantic world, have made possible the research for which this evaluation was carried out, since the CompaSciences software bases its analysis on semantic units. Moreover, the research team found, as did the authors discussed above, a solution external to speech-to-text to manage the experts’ person names, which are at the core of the “semantic maps” developed: using the metadata of the transcripts so as to be able to ignore spoken and wrongly transcribed names. In Goody’s words, the point is to stay “aware of the limits and possibilities inherent in the various techniques of the intellect” (1979, p. 57).
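
By way of illustration, the external solution mentioned above might take the following form; the metadata fields, segment structure, and expert names are hypothetical, and only the show Infrarouge and the mis-transcribed “Unédic” come from the materials discussed in this article.

broadcast_metadata = {  # hypothetical catalogue record supplied with the source
    "show": "Infrarouge",
    "date": "2018-12-19",
    "experts": ["First Expert", "Second Expert"],
}

def label_segments(stt_segments, metadata):
    """Drop the unreliable spoken-name guesses from the STT output and attach
    the expert names known from the broadcast metadata instead."""
    labelled = []
    for segment in stt_segments:
        segment = dict(segment)                      # leave the input intact
        segment.pop("spoken_name_guess", None)       # ignore STT-derived names
        segment["candidates"] = metadata["experts"]  # catalogue-supplied names
        labelled.append(segment)
    return labelled

segments = [{"speaker": "MS1", "spoken_name_guess": "Unédic"}]
print(label_segments(segments, broadcast_metadata))
# -> [{'speaker': 'MS1', 'candidates': ['First Expert', 'Second Expert']}]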

63This article opened with the necessity of retracing the software’s “networks of mediations” (Latour, 2005, p. 136) in order to avoid a “black box” effect for speech-to-text users. This is why I have chosen to open this box and show the software’s backstage, by means of an analysis of its sources of error but also by uncovering the epistemological model that presides over its conception. It appears that the machine successfully transforms the grammatical rules of a language into sequences of words by means of its language model. On the other hand, it is still unable to capture the “machinery of conversation” produced through an embodied set of methods that allow human beings to vary the “sequential environment” they co-produce by adjusting to one another in situation. These are two diametrically opposed models of communication. For one, conversation is a disruptive element with regard to the written, standardized rules of a language. For the other, conversation is the elementary system of turn-taking on which all other, more constrained systems of exchange are based (Sacks et al., 1974, p. 729). It is thus surely no coincidence that conversation analysis was developed from the outset on the basis of telephone conversations and that these are, conversely, the bête noire of computer scientists working with speech-to-text.

64As we have seen, animated conversational passages, which are sometimes demanding to transcribe, were nevertheless accessible to the human listener. Vocapia, however, claims that the difficulties encountered by the software with a certain “style of speech” also pose problems for “a human being”: “Humans are used to understanding speech, not to transcribing it, and only speech that is well formulated can be transcribed without ambiguity”.24 In a sense, it is correct to say that some transcription skills are acquired through practice. But this article shows that there is a real asymmetry between software skills and human skills with respect to language, an asymmetry that stems from the opposition between epistemological and praxeological models. This holds for conversational phenomena as well as for vocal singularities, the vast majority of which pose no problem to human perception, whether for understanding or for transcribing. The asymmetry is all the more marked as humans can train themselves and thus get used to transcribing certain complex phenomena. The machine can also be trained but, as we have seen, its learning and understanding remain restricted to the limits of its computer models, which are epistemological in nature; conversational habit constitutes, at best, a distant horizon for it.

65Therefore, although the questioning of this article has taken Latour’s reflections as its guiding thread, I propose to end with a conclusion against Latour’s symmetrical anthropology and his anti-phenomenology. Latour defines his research program as incompatible with phenomenology “because of the excessive stress given by phenomenologists to the human sources of agency” (2005, p. 61). Yet Latour seems to commit a category error in this stance. Fundamentally, what is at issue is not a question of action; methodologically, indeed, the software’s work of transcription has been treated here as equivalent to the social scientist’s act of transcribing. What is at stake is rather a question of perception, which Latour rejects by inciting us to drop “all this opposition between ‘standpoint’ and ‘view from nowhere’” (p. 145). Getting rid of perception, and consequently of perspective, is the price to pay for placing objects on the same ontological level as human beings. Objects have neither eyes nor the sense(s) of interaction, and neither does software. This does not mean that they have no role to play as “actants” in scientific work; on the contrary, this article testifies to it.

66The analyses offered in this article were produced from a human perception that is able to carry out the technical de-folding (dé-pliage technique) that Latour calls for. One may retort that this is a particular, sociological point of view, which is not that of computer scientists, who would be able to grasp the software’s standpoint. Nevertheless, it is they who implement their own point of view in that of the machine, within the limits of technical possibilities, and who suspend their competences as ordinary conversationalists in order to think for the software, on the basis of an epistemological conception of communication. In this sense, this suspension can be seen as an action of the software on humans, but it remains a suspension of competences that are not interchangeable with those of the machine. Moreover, these same members’ skills are recovered by computer scientists when they make the necessary corrections to their software. They say it themselves: the only way to be sure that errors are indeed due to ambiguities inherent in the speech signal, and not to inaccurate modelling, is to compare the software’s recognition errors with those made by humans (Lamel et al., 2011, p. 128).

67Finally, somewhat ironically, it is possible to conclude with Latour, and against him:

The technical object is opaque and, to be honest, incomprehensible because it can only be understood if we add to it the invisibles that first make it exist, then maintain it, support it, and sometimes ignore and abandon it. […] Without the invisibles, no object would hold together, and especially no automaton would achieve this prodigy of automation. […] we always omit to add to the technical objects what establishes them under the pretext, which is also true, that they stand alone once launched, except that they can never remain alone and without care (2010, p. 26).

68The question to ask is: who adds these invisibles, who sees them, and who takes care of them? We know that this question calls for a phenomenological answer, one that has to do with perception: the perception of a being of flesh and blood who, failing to see the interaction reappear, sees it disappear in the course of these transformations by the technical object.

This article would not have been possible without the financial support of the Institute for Media Innovation, nor without stimulating exchanges with Boris Beaude, Agathe Chevalier, Matthieu Devaux and Ulrich Fischer in the framework of the CompaSciences project. I would particularly like to thank Philippe Gonzalez, also a member of this project, who contributed to making this text more forceful thanks to his advice and meticulous proofreading of the original version. Finally, I would like to thank the anonymous reviewers of the RAC who helped me to clarify and enhance my argument. The final result is my responsibility alone.


Bibliography

Alquier, E., Carrive, J. & Lalande, S. (2017). Production documentaire et usages. L’automatisation dans les outils de consultation et de documentation de l’Institut national de l’audiovisuel (Ina). Document numérique, 2(20), 115-136.

Barras, C., Le, V.-B. & Gauvain, J.-L. (2020). Vocapia-LIMSI System for 2020 Shared Task on Code-switched Spoken Language Identification. The First Workshop on Speech Technologies for Code-Switching in Multilingual Communities, 1-5.

Bourdieu, P. (1996). Juin 1991, Ahmed X. Revue de littérature générale (P.O.L), 96/2 digest, n.p.

Bredin, H., Poignant, J., Tapaswi, M., Fortier, G., Le, V. B. et al. (2012). Fusion of speech, faces and text for person identification in TV broadcast. In V. Murino & R. Cucchiara (dir.). ECCV-12th European Conference on Computer Vision (pp. 385-394). Berlin: Springer.

Bredin, H., Laurent, A., Sarkar, A., Le, V.-B., Rosset, S. et al. (2014). Person Instance Graphs for Named Speaker Identification in TV Broadcast. Odyssey – The Speaker and Language Recognition Workshop, 1-8.

Bredin, H., Roy, A., Le, V.-B. & Barras, C. (2014). Person instance graphs for mono-, cross- and multi-modal person recognition in multimedia data: application to speaker identification in TV broadcast. International Journal of Multimedia Information Retrieval, 3(3), 161-175.

Clavel, C., Adda, G., Cailliau, F., Garnier-Rizet, M., Cavet, A. et al. (2013). Spontaneous speech and opinion detection: mining call-centre transcripts. Language Resources and Evaluation, (47), 1089-1125.

Despres, J., Lamel, L., Gauvain, J.-L., Vieru, B. et al. (2013). The Vocapia Research ASR Systems for Evalita 2011. In B. Magnini, F. Cutugno, M. Falcone & E. Pianta (dir.). Evaluation of Natural Language and Speech Tools for Italian (pp. 286-294). Berlin: Springer.

Esteban, C. (2020). Construire la « compréhension » d’une machine. Une ethnographie de la conception de deux chatbots commerciaux. Réseaux, 2(220-1), 195-222.

Fraga-Silva, T., Gauvain, J.-L., Lamel, L., Laurent, A. et al. (2015). Active Learning data selection for limited resource STT and KWS. Interspeech, (15), 3159-3163.

Gauvain, J., Lamel, L., Le, V. B., Despres, J. et al. (2019). Challenges in Audio Processing of Terrorist-Related Data. In I. Kompatsiaris, B. Huet, V. Mezaris et al. (dir.). MultiMedia Modeling Part II (pp. 80-92). Berlin: Springer.

Gelly, G., Gauvain, J. L., Lamel, L., Laurent, A. et al. (2016). Language Recognition for Dialects and Closely Related Languages. Odyssey – The Speaker and Language Recognition Workshop, 124-131.

Goody, J. (1979). La Raison graphique. La domestication de la pensée sauvage. Paris : Éditions de Minuit.

Jefferson, G. (2004). Glossary of transcript symbols with an introduction. In G. H. Lerner (dir.). Conversation Analysis: Studies from the First Generation (pp. 13-31). Amsterdam/Philadelphia: Benjamins.

Lamel, L., Courcinous, S., Despres, J., Gauvain, J.-L. et al. (2011). Speech Recognition for Machine Translation in Quaero. International Workshop on Spoken Language Translation, 121-128.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Latour, B. (2010). Prendre le pli des techniques. Réseaux, 5(163), 11-31.

Mondada, L. (2007). Enjeux des corpus d’oral en interaction : re-temporaliser et re-situer le langage. Langage et société, 3(121-122), 143-160.

Mondada, L. (2008). Contributions de la linguistique interactionnelle. In J. Durand, B. Habert & B. Laks (dir.), Congrès Mondial de Linguistique Française. Discours, pragmatique et interaction (pp. 881-897). Paris : Institut de Linguistique Française.

Poignant, J., Bredin, H., Le, V.-B., Besacier, L., Barras, C. et al. (2012). Unsupervised Speaker Identification using Overlaid Texts in TV Broadcast. Interspeech – Conference of the International Speech Communication Association, 1-4.

Poignant, J., Besacier, L. & Quénot, G. (2014). Nommage non supervisé des personnes dans les émissions de télévision. Utilisation des noms écrits, des noms prononcés ou des deux ? Document numérique, 1(17), 37-60.

Quéré, L. (1991). D’un modèle épistémologique de la communication à un modèle praxéologique. Réseaux, 9(46-7), 69-90.

Relieu, M. & Brock, F. (1995). L’infrastructure conversationnelle de la parole publique. Analyse des réunions politiques et des interviews télédiffusées. Politix, 8(31), 77-112.

Rioufreyt, T. (2018). La transcription outillée en SHS. Un panorama des logiciels de transcription audio/vidéo. Bulletin de Méthodologie Sociologique, (139), 96-133.

Roy, A., Bredin, H., Hartmann, W. et al. (2015). Lexical speaker identification in TV shows. Multimedia Tools and Applications, (74), 1377-1396.

Sacks, H., Schegloff, E. A. & Jefferson, G. (1974). A Simplest Systematics for the Organization of Turn-Taking for Conversation. Language, 4(50), 696-735.

Silberztein, M. (2019). Les outils informatiques au service des linguistes : présentation. Langue française, 3(203), 7-14.

Tancoigne, E., Corbellini, J.-P., Deletraz, G., Gayraud, L., Ollinger, S. et al. (2020). La transcription automatique : un rêve enfin accessible ? Analyse et comparaison d’outils pour les SHS. Nouvelle méthodologie et résultats. [Rapport de recherche] MATE-SHS. Halshs-02917916v2

Tran, V.-A., Le, V.-B., Barras, C. & Lamel, L. (2011). Comparing Multi-Stage Approaches for Cross-Show Speaker Diarization. Interspeech – 12th Annual Conference of the International Speech Communication Association, 27-31.

Vasilescu, I., Vieru, B. & Lamel, L. (2014). Exploring Pronunciation Variants for Romanian Speech-to-text Transcription. Spoken Language Technologies for Under-Resourced Languages, 161-168.

Wagner, J., Fraga-Silva, T., Josse, Y. & Schiller, D. (2017). Infected Phonemes: How a Cold Impairs Speech on a Phonetic Level. Interspeech, 3457-3461.


Appendix

Transcript conventions CA

Illustration credit: Marine Kneubühler


Notes

1 This research, entitled Scientific Expertise and Media Discourse, was funded by the Initiative for Media Innovation (IMI) in 2020 (https://www.media-initiative.ch/project/scientific-expertise-and-media-discourse/) and is currently being pursued in a second phase (https://www.media-initiative.ch/project/compasciences-2-0/). Avis d'Experts is a Website that gathers all the broadcasts in which experts from universities and schools of higher education in French-speaking Switzerland appear on RTS: https://avisdexperts.ch/.

2 All translations from French are mine.

3 https://www.vocapia.com/voxsigma-speech-to-text.html

4 Latour uses the term “technical folding” (pliage technique) to refer to the mode of existence of the technical object, which is made up of mediations (“trick”, “material differential”, “resistance”) rendered invisible by its proper functioning (2010, p. 29).

5 https://www.vocapia.com/glossary.html

6 This metric was also used and discussed in the report by Tancoigne et al. (2020), which accounts for an extensive evaluation of eight automatic transcription software packages for research in the human and social sciences, including the Vocapia software. The latter stands out from the others as one of the best performing, especially for “processing spontaneous speech” (p. 73).

7 Vocapia has a web page giving access to the PDFs of many articles referring to the state of the art on speech-to-text. Most of the computational literature cited here comes from this page: https://www.vocapia.com/publis.html. This abundance of open access information indicates that Vocapia considers itself to be a transparent provider of the “translation chains” that make up its software. However, the computer science behind these chains requires an enormous amount of research to be properly understood and retraced.

8 Technical definitions that do not refer to a specific text in the bibliography or to another Internet link refer to definitions from the Vocapia online glossary: https://www.vocapia.com/glossary.html.

9 https://www.vocapia.com/speech-to-text.html

10 The T refers to the table, and the number refers to the source line of the error addressed in the demonstration.

11 https://www.vocapia.com/speech-to-text-technology.html

12 https://www.vocapia.com/faqs.html

13 This standardization was one of Bourdieu's concerns in his Misère du monde. He thus chose not to reproduce the “extreme difficulty of expressing oneself” certain interviewees faced by erasing the particularities of their pronunciation or reducing silences. In a note following the publication of the book, he argues, “The literal transcription risks being unintelligible, in any case not very ‘literary’ (whereas I would like these testimonies to be read with the attention one gives to ‘literary’ things [...])” (1996).

14 https://www.vocapia.com/faqs.html

15 https://www.vocapia.com/faqs.html

16 This “inseparability between primary and secondary data” (Mondada, 2007, p. 145) does not imply a rejection of transcription software as a work assistant. On the contrary, the proponents of CA often use software such as CLAN, ELAN, Praat, or ANVIL which make it possible to align the text with its source and to materialize this indissociability through technology (p. 145). No software was used for the transcriptions made here.

17 “Climat: la grande hypocrisie ?” (Climate: the great hypocrisy?), show title: Infrarouge, December 19, 2018: https://www.rts.ch/play/tv/redirect/detail/10083638

18 For cross-references in the analyses, STT or CA designates the type of transcription and the number refers to the corresponding line. In the STT versions, MS and FS correspond to the identification of a male or female voice, and the number that follows is assigned to a given speaker. This assignment is often incorrect because the software tends to identify more speakers than are actually present. The shades of gray highlight the “hesitations” of the software: the lighter colors indicate a proposition calculated as implausible. These shades are not very useful: it often happens that the software “doubts” when the proposition is correct (in Excerpt 1, “industriel” (industrialist, STT18; CA23) is right), and it can be “confident” when it is wrong (“Unédic” (STT10; CA13) is a mistake). The conventions used for CA transcriptions can be found at the end of the article and are taken from Jefferson’s system (2004). A translation of the sources is provided below the French for both versions so that English-speaking readers can get an idea of the errors made by the software. This exercise is necessarily imperfect, as the language of analysis is French. As far as possible, the parts well transcribed by the STT in French are kept in both English versions according to what would hypothetically have been said in the context of the shows. A word-for-word translation is provided for the erroneous terms substituted in the STT French version; I thus focused on highlighting the changes of meaning when the software proposed a homophone of the word actually uttered, since two French words pronounced identically may translate into English words that sound nothing alike. For example, “on en pêche” (we fish for it) and “on empêche” (we prevent) sound the same in French. I would like to sincerely thank Agathe Chevalier, who helped me with the tricky translation of these transcripts.

19 A “multimodal transcription” (Mondada, 2007, p. 151) would show that the guest also uses his gaze and the phenomenon of pointing to signify the graph’s absence.

20 “La saison de la pêche” (The Fishing Season), show title: L’oreille des Kids (The Kids’ Ear), May 11, 2016: https://www.rts.ch/play/tv/redirect/detail/7716038

21 VoxSigma is nevertheless considered to be efficient in transcribing “spontaneous speech” (Tancoigne et al., 2020, p. 73). However, the evaluation of this report does not consider the transcriptions of data excerpted from an associative meeting, which constituted a “real challenge” for all software (p. 71) and was thus systematically rejected.

22 Thus, among the rules listed by Sacks et al. (1974), several have this form: “...is not fixed but varies” (p. 701).

23 https://www.vocapia.com/speech-to-text.html

24 https://www.vocapia.com/speech-to-text.html


List of illustrations

Figure 1: Excerpt 1 STT. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-1.png
Figure 2: Excerpt 1 CA. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-2.png
Figure 3: Excerpt 2 STT. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-3.png
Figure 4: Excerpt 2 CA. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-4.png
Figure 5: Excerpt 3 STT. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-5.png
Figure 6: Excerpt 3 CA. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-6.png
Figure 7: Excerpt 4 STT. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-7.png
Figure 8: Excerpt 4 CA. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-8.png
Transcript conventions CA. URL: http://journals.openedition.org/rac/docannexe/image/27984/img-9.png

Illustration credit for all figures: Marine Kneubühler

How to cite this article

Electronic reference

Marine Kneubühler, “What happens to semantics and interaction in a software’s automatic transcriptions?”, Revue d’anthropologie des connaissances [Online], 16-2 | 2022, online since 01 June 2022, accessed 24 April 2025. URL: http://journals.openedition.org/rac/27984; DOI: https://doi.org/10.4000/rac.27984


Author

Marine Kneubühler

Research fellow at the University of Lausanne. In her work, she uses and develops various tools in qualitative methodology and is closely interested in writing as a technique and mediation in the transformation of research as well as of individual and collective experience. Her questions extend to the consequences of technical devices for perception, the body and the constitution of collectives.
ORCID: https://orcid.org/0000-0002-7791-2687

Address: Institut des sciences sociales – Laboratoire THEMA, Faculté des sciences sociales et politiques, Université de Lausanne, Bâtiment Géopolis-5538, CH-1015 Lausanne (Suisse)
Email: marine.kneubuhler[at]unil.ch


Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
