
A multimodal corpus to study videoconference interactions for techno-pedagogical competence in second language acquisition and teacher education

Un corpus multimodal pour étudier les interactions par visioconférence pour le développement des compétences techno-pédagogiques en didactique des langues et formation de formateurs
Marco Cappellini, Benjamin Holt, Brigitte Bigi, Marion Tellier and Christelle Zielinski

Abstract

This article describes the construction and annotation of a multimodal and multilingual corpus (French, English and Mandarin Chinese) for the study of second language acquisition and of professional development in language teaching. The corpus was built within a research project whose main objective is to determine which techno-semio-pedagogical competencies can be developed informally and which require formal training. We explain the theoretical framework adopted, characterized by an ecological approach to the interactive environment. We then illustrate the procedures for collecting audio, video and eye-tracking data. We also detail the annotation of the raw data to produce an analysis corpus, drawing on various automatic and semi-automatic annotation tools. Finally, we explain how such a corpus allows us to study the development of learners' multimodal competence and of tutors' techno-pedagogical competence, and we point to further possible research questions.


Introduction

1The combination of highly developed desktop videoconferencing (DVC) software and fast Internet speeds has led to the emergence of innovative pedagogical practices in foreign language education. DVC, with its widespread integration into pedagogical environments since the mid-2000s, has become a common tool for Computer-Assisted Language Learning (CALL) (O’Dowd & O’Rourke 2019). In these pedagogical environments, often named “telecollaboration” or “virtual exchange,” students in different countries communicate and collaborate with each other online. This article aims to describe the inception and construction of a multilingual multimodal corpus in order to explore some of the current open questions in the field of teacher training and CALL.

2Virtual exchange programs come in a variety of forms, and pedagogical models can vary. The most widespread is the e- or teletandem model, in which pairs of students of different mother tongues interact in both languages in order to learn each other’s language. The development of teaching skills is not the focus of this model, and teacher training is therefore not included in the program. In another model, based on the Français en (Première) Ligne (F1L) project, student teachers (henceforth tutors) develop and implement learning activities with distant learners via DVC platforms. Combined with formal teacher training, the aim is to foster the development of techno-semio-pedagogical (TSP) competence (Guichon 2012, defined below).

3Within this context, research has been carried out on teacher training, both to identify which skills are necessary for online tutoring, and to implement pedagogical programs that offer hands-on experience to tutors. In this article, we first explain the objectives, theoretical framework and research questions that led to the first steps in the construction of our corpus. We also distinguish our corpus from existing corpora and explain how it offers researchers the possibility to explore issues related to teacher training. Second, we present a detailed account of the data collection procedures so that interested researchers can replicate our endeavor. Third, we explain the data annotation process that resulted in the VAPVISIO corpus (LPL, 2021). Fourth, we discuss research questions that the VAPVISIO corpus can help to explore.

1. Objectives and theoretical framework of the VAPVISIO project

4The VAPVISIO1 project aims to promote advances in two related fields: the modeling of virtual exchanges, and the elaboration of innovative research methodologies. Our project features two purpose-built models of virtual exchange, based on the teletandem and the F1L models. Our innovative methodological approach combines the benefits of in-depth case studies and of larger-scale analyses of learning dynamics, allowing for comparisons across learning environments. The main objective of the project is to study the emergence of techno-semio-pedagogical competence in different pedagogical models and languages, in order to understand which skills are developed naturally through practice and which ones require formal training.

1.1 Definition of techno-semio-pedagogical competence

5The literature presents several models of pedagogical competence related to teaching languages with Information and Communication Technologies (ICTs) (Dooly 2010, Kessler 2016, among others). One of the most widespread models for online language teaching is Hampel & Stickler’s pyramid model (2005, 2015), which starts from basic ICT skills before moving up to more individualized and creative ones. However, despite a study dedicated to DVC interaction (Hampel & Stickler 2012), this model was neither specifically conceived for, nor widely adapted to, DVC Computer-Mediated Communication (CMC). By contrast, Guichon’s model of techno-semio-pedagogical competence (2012) was initially designed to describe the integration of ICTs into language teaching in general, and was subsequently adapted on a large scale to teaching through DVC (Guichon & Tellier 2017). This model defines techno-semio2-pedagogical competencies as “knowledge and skills about:

  • communication tools available (forums, wikis, videoconferencing, etc.) that are most suitable for the objectives of a pedagogical sequence;

  • taking into account the appropriate modes (written, oral, video, or a combination thereof) for a given activity and for the development of linguistic competencies;

  • the pedagogical management of learning activities with and related to CMC tools (planning, regulations during task accomplishment, learning assessments)”3 (Guichon 2012: 187).

6This general definition was subsequently adapted to the specific context of teaching French as a Foreign Language through DVC in the F1L-based ISMAEL4 telecollaborative project (Guichon & Tellier 2017). This made it possible to operationalize the model by identifying and defining a set of specific recommendations and actions that online teachers should be able to accomplish, and that should therefore be included in their training.

1.2 Literature review and limitations

7A review of the existing literature (Cappellini 2020) has shown that among the studies on teacher education in CALL and telecollaboration, very few are based on analysis of actual pedagogical practices as opposed to relying on the perceptions of tutors. Within this small body of research, pedagogical and digital skills are observed at specific moments and not longitudinally, as is the case with the studies collected in Guichon & Tellier (2017) based on the ISMAEL corpus. Longitudinal corpora of DVC CMC do exist, such as the one developed within the Teletandem Brasil project (Aranha & Wigham 2020), but these do not focus specifically on teacher training and on the development of online language teaching skills. Moreover, most of these projects and related studies focus on only one telecollaboration model, which precludes a comparative approach that would enable researchers to determine which skills require formal training.

8The second gap in the literature that the VAPVISIO project aims to fill concerns the mismatch between definitions of techno-semio-pedagogical competence and the methodological tools used to observe its emergence. Indeed, most authors agree that this competence includes not only the ability to effectively use relevant modes and strategies for communication (and possibly teaching), but also knowledge (see Guichon’s definition above) and awareness (Hauck 2010) of the semiotic modes available. This knowledge and awareness have been studied only through retrospective introspection, especially through learning logs (e.g. Fuchs et al. 2012) and less commonly through stimulated recall (Cohen 2017), but never within interaction itself.

9The VAPVISIO project aims to fill these gaps by directly observing, through the use of eye-tracking technology and multimodal conversation analysis (Cappellini 2021), the ways in which modes are selected during DVC-based pedagogical interaction. This approach allows us to study the unfolding of tutors’ TSP competence during interaction itself, offering compelling insight into the interlocutors’ cognition, for instance in terms of joint attention (Cappellini & Hsu 2022, Shi & Stickler 2021).

1.3 Scientific objective and hypotheses

10The main objective of the VAPVISIO project is to understand, in the context of telecollaborative projects, which TSP competencies require formal training. Our corpus and methodological framework allow us to directly compare two pedagogical models: the teletandem model and the F1L-based model (Cappellini & Azaoui 2017). We posit that since the teletandem model lacks any formal guidance on TSP or multimodal competencies, any such competencies that do emerge will do so naturally through experience. In contrast, because formal teacher training is part of the F1L-based telecollaboration model, we expect tutors to develop higher levels of TSP competence. This comparison will therefore allow us to identify the TSP competencies that require formal training. We test two hypotheses concerning the evolution of TSP competence over time:

  1. Participants’ TSP competencies are superior at the end of a telecollaborative project than at the beginning.

  2. Tutors’ TSP competencies at the end of an F1L project are superior to those of language learners at the end of a teletandem project.

2. Data collection

11In this section we first describe the four telecollaborative projects that comprise the VAPVISIO project. Then, after addressing ethical considerations, we describe the collection of audio, video and eye-tracking data.

2.1 Pedagogical scenario

12The VAPVISIO corpus is made up of 41 participants (tutors and student language learners), divided into four telecollaborative settings: two teletandem-based virtual exchanges and two F1L-based virtual exchanges. The two teletandem exchanges involved language learners from Aix-Marseille University and Arizona State University for French and English, and learners from Aix-Marseille University and the Shenzhen Foreign Language University (深圳外国语学院) for French and Mandarin Chinese. All of the learners were undergraduates at their universities, enrolled in a variety of disciplines, most of them language-related. As for the F1L-based telecollaborative projects, the first paired post-graduate tutors enrolled in a master’s degree program in teaching French as a foreign language at Aix-Marseille University with undergraduate learners of French at the University of California Berkeley. The second paired post-graduate tutors enrolled in a master’s degree program in teaching Mandarin Chinese as a foreign language at the Hong Kong Polytechnic University (香港理工大学) with undergraduate learners of Chinese at Aix-Marseille University. All participants in the four telecollaborative projects were in their twenties and had a proficiency level in their foreign language between B1 and B2 on the Common European Framework of Reference for Languages scale (Council of Europe 2001). The participants’ native languages largely corresponded to the geographic locations of their institutions, with the following exceptions: an Iranian learner at Arizona State University, a Mexican learner at AMU in the AMU-Shenzhen teletandem project, and three tutors at AMU for the AMU-UCB project, from Brazil, Colombia and Russia. All of them had near-native proficiency in the language of their institution. Data collection took place during the spring semester of 2019.5

Table 1. Composition of groups for data collection

  • Teletandem setting, French-(English): 4 pairs, Aix-Marseille University—Arizona State University
  • Teletandem setting, Chinese-(French): 5 pairs, Shenzhen University—Aix-Marseille University
  • F1L setting, French-(English): 5 groups, Aix-Marseille University—University of California Berkeley
  • F1L setting, Chinese-(French): 5 groups, Hong Kong Polytechnic University—Aix-Marseille University

2.2 Ethical issues and consent

13All of the students provided informed consent before data were collected, and participation was voluntary. In order to obtain the necessary authorizations, ethical issues were dealt with according to the regulations of the country or university involved. For instance, data collection in France followed the guidelines of the Laboratoire Parole & Langage and the French participants gave written consent to be recorded. All participants were given an explanation of the study and its procedures, and provided informed consent prior to its commencement. A request was also filed with the CNRS’s Data Protection Officer (certificate number 2-20082). After data collection, recordings were anonymized by producing white noise when family names were uttered, or by blurring videos when personal information was displayed on the screen (such as participants showing their social media profiles on mobile devices). All faces were kept visible in order to study facial expressions.

2.3 Audio-visual modalities

14During the interactions, the French participants sat in front of an external monitor that had a Tobii eye-tracking device attached to the bottom. They used an external keyboard and mouse which were connected, along with the external monitor and eye-tracker, to a laptop computer that ran the DVC and eye-tracking software. A separate desktop computer equipped with an external soundcard was used for audio recording. An external camera was positioned on a table in order to film what was happening outside the webcam’s field of view. In the following paragraphs, we describe our methods of collecting audio, video and eye-tracking data.

Audio recording

15The audio recording devices and their settings (microphone, bit rate, file format, software), as well as the recording environment, have a decisive effect on the quality of the corpus and on subsequent annotations. We therefore strove to respect the following guidelines:

  1. One audio channel per speaker, i.e., one microphone per speaker;

  2. The use of a professional-grade head-worn cardioid microphone with a maximum audio frequency bandwidth;

  3. The use of an anechoic chamber or low-noise environment;

  4. The use of uncompressed file formats, commonly Waveform (WAV);

  5. The use of sampling rates of at least 16000 Hz—ideally 48000 Hz, with 16 bits.

16As for point 3, our data were recorded at Aix-Marseille University’s Centre de Formation et Autoformation en Langues in a dedicated room. We complied with point 4 by using the audio recording program Audacity6, and with points 2 and 5 by using a Roland Rubix 22 audio interface linked to an AKG C520 headset microphone. Since it is difficult to obtain two separate voice channels in DVC settings, we implemented an innovative solution to address point 1. The local interlocutors wore the aforementioned headset microphone, and with the help of the desktop computer’s external soundcard, the incoming and outgoing audio streams were split. The microphone’s output was split and directed in parallel to the laptop computer used for videoconferencing and to the desktop computer for recording. The distant interlocutor’s incoming audio stream was also split and directed in parallel to the local interlocutor’s headset and to the desktop computer for recording. It must be noted that the recording of the distant interlocutor’s voice was mixed with any computer-generated sounds produced by the local laptop.
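As a quick illustration of guidelines 1, 4 and 5, the short Python sketch below (not part of the project’s toolchain; the file name is hypothetical) checks that an exported WAV file is mono, 16-bit and sampled at 16 kHz or above, using only the standard library.

```python
import wave

def check_wav(path, min_rate=16000):
    """Report whether a WAV file meets the recording guidelines:
    one channel per speaker, 16-bit samples, sampling rate >= 16 kHz."""
    with wave.open(path, "rb") as wav:
        channels = wav.getnchannels()      # 1 = mono, i.e. one speaker per file
        sample_width = wav.getsampwidth()  # bytes per sample; 2 = 16 bits
        rate = wav.getframerate()          # sampling rate in Hz
    ok = channels == 1 and sample_width == 2 and rate >= min_rate
    print(f"{path}: {channels} ch, {8 * sample_width} bits, {rate} Hz -> "
          f"{'OK' if ok else 'does not meet the guidelines'}")
    return ok

check_wav("local_speaker.wav")  # hypothetical file name
```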

17Audio recordings come from three sources. Aside from the procedure described above, audio recordings were retrieved from the dynamic screen captures produced by Tobii Studio on the laptop and from the external camera. The audio tracks from the Tobii video exports and from the external camera were used only to synchronize the different video recordings, and were ultimately discarded.7

Video recording

18Visual data come from two main sources: dynamic screen recordings produced by the Tobii Studio software8, and video recordings from an external camera. The external camera was positioned to capture the interlocutor’s torso, hands and face, allowing us to record all hand gestures produced, including those that were not visible to the webcam.

Data export and synchronization

19After each recording, video and audio files were exported using Tobii Studio on the laptop, and Audacity on the desktop. Using the Tobii Studio program, two video files per interaction were exported, one with eye movements and one without, each at 30 frames per second. Eye movements were represented by red dots and lines superimposed onto the video, corresponding to fixations and saccades (see section 2.4). These two video files and the one from the external camera were subsequently compressed into MP4 video files using the H264 video codec in VSDC free video editor in order to reduce their size and to make them usable in Adobe Premiere Pro and in ELAN (see section 3.3). The external soundcard allowed the desktop computer to record a stereo audio file using Audacity, with one interlocutor’s voice encoded as the left channel and the other as the right. From this stereo sound file, two mono sound files were exported, one for each interlocutor, as WAV files.
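The final step of this paragraph, exporting one mono WAV file per interlocutor from the stereo recording, can also be scripted. The sketch below is a minimal alternative to the Audacity/Premiere workflow actually used by the authors; it relies on SciPy, and the file names are placeholders.

```python
from scipy.io import wavfile

# Read the stereo recording: left channel = local speaker, right = distant speaker.
rate, stereo = wavfile.read("session_stereo.wav")   # placeholder file name
assert stereo.ndim == 2 and stereo.shape[1] == 2, "expected a 2-channel WAV file"

# Write one mono WAV per interlocutor, keeping the original sampling rate.
wavfile.write("local_speaker.wav", rate, stereo[:, 0])
wavfile.write("distant_speaker.wav", rate, stereo[:, 1])
```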

20The MP4 video files and the WAV sound files were then synchronized using Adobe Premiere Pro. Five files were exported per interaction: two WAV sound files (one per interlocutor, 44.1 kHz, mono, 16-bit) and three MP4 video files (one screen recording with eye-tracking visualizations, one without, and one video file from the external camera, 1920x1080 pixel resolution, 30 frames per second). The three video files were mixed with the audio-only files, meaning that the final video exports contain the sound of all interlocutors. These five files (three video files and two sound files) are synchronized, meaning that they can be played back together using ELAN (see section 3.3).

2.4 Eye-tracking

21Eye-tracking data were collected in order to estimate the coordinates of the interlocutors’ fixations on the screen. Fixations are periods of relatively stable gaze, and saccades are the rapid movements between fixations. According to the eye-mind hypothesis (Conklin et al. 2018), fixations offer insight into where the interlocutors are directing their attention and cognitive effort.

22A Tobii Pro X3-120 eye-tracking device was used to measure the interlocutors’ gaze on the screen at a sampling rate of 120 Hz. An infrared device was fixed to the bottom edge of the Dell external monitor (21.5-inch, LED backlit, 1920x1080 pixel resolution, 60 Hz refresh rate), and was connected to an External Processing Unit (EPU) that was connected via USB to the laptop (Dell Latitude 7490). Participants were instructed to sit within the operating range of 50–90 cm away from the screen.

23The Tobii Studio 3.4.8 software installed on the laptop was used to manage the recording, pre-processing and export of the data. Before each recording, the eye-tracker calibrated itself by displaying a moving red dot on the screen that participants followed with their eyes. Recording continued until the experimenter stopped it at the end of each one-hour session.

24The eye-tracking data from each session were exported as a TSV file, with each row representing the gaze coordinates recorded approximately every 8 milliseconds (i.e., at 120 Hz). These data include the timestamp, measured as elapsed time in milliseconds from the beginning of the recording, the estimated gaze coordinates (x, y) in pixels relative to the top-left corner of the screen, and the fixation index, which specifies whether the data point is part of a fixation, as determined by the Tobii I-VT fixation filter with standard settings (velocity threshold of 30 degrees per second and minimum fixation duration of 60 milliseconds). Additional columns indicate the specific region of the screen in which the fixation is located, as described in section 3.2 below.
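To give a concrete idea of how such an export can be processed, here is a pandas sketch that groups gaze samples into fixations and computes their durations; the file name and the column labels (timestamp_ms, gaze_x, gaze_y, fixation_index) are placeholders that would need to be mapped onto the actual headers produced by Tobii Studio.

```python
import pandas as pd

# Load one session's gaze export (tab-separated; names are placeholders).
gaze = pd.read_csv("session_01_gaze.tsv", sep="\t")

# Keep only the samples flagged as belonging to a fixation by the I-VT filter.
fixations = gaze.dropna(subset=["fixation_index"])

# One row per fixation: start and end times, duration, and mean gaze position.
summary = (
    fixations.groupby("fixation_index")
    .agg(start_ms=("timestamp_ms", "min"),
         end_ms=("timestamp_ms", "max"),
         x=("gaze_x", "mean"),
         y=("gaze_y", "mean"))
    .assign(duration_ms=lambda df: df["end_ms"] - df["start_ms"])
)
print(summary.head())
```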

3. Annotations

25Manually annotating a wide range of phenomena such as gaze and gesture production is costly. For this reason, partially or fully automatic annotation is now common practice in computational linguistics, with automatic annotations usually post-edited manually before further analysis. The following sections describe the different types of annotation used for the VAPVISIO project.

3.1 Semi-automatic annotation of speech

26Drawing on experience gained from the CID - Corpus of Conversational Data (Blache et al. 2010), we adapted a multi-layered annotation scheme for the VAPVISIO corpus. In the following paragraphs, we describe the procedures and tools used for speech annotation.

27The first step consisted of determining the Inter-Pausal Units (IPUs). This was performed on each audio file, i.e., on the speech of each interlocutor. SPPAS (Bigi 2015) was used to divide the audio channels into segments of speech and silence, because it allows the volume threshold to be set automatically and other parameters to be adjusted. The minimum duration for an annotation to be generated was fixed at 100 milliseconds, and the minimum duration of a silence at 200 milliseconds.
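For readers unfamiliar with IPU detection, the following deliberately simplified sketch shows the underlying logic with the parameter values reported above (200 ms minimum silence, 100 ms minimum IPU). It is not SPPAS code: SPPAS estimates the volume threshold automatically and is considerably more robust, whereas here the RMS threshold must be supplied by hand.

```python
import numpy as np

def detect_ipus(signal, rate, rms_threshold,
                win_ms=10, min_sil_ms=200, min_ipu_ms=100):
    """Rough Inter-Pausal Unit detection from frame-level RMS energy.
    Returns a list of (start_s, end_s) speech intervals."""
    win = int(rate * win_ms / 1000)
    n_frames = len(signal) // win
    frames = signal[:n_frames * win].reshape(n_frames, win).astype(float)
    voiced = np.sqrt((frames ** 2).mean(axis=1)) > rms_threshold

    # Collapse consecutive voiced frames into candidate IPUs (frame indices).
    ipus, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            ipus.append((start, i))
            start = None
    if start is not None:
        ipus.append((start, n_frames))

    # Merge IPUs separated by silences shorter than min_sil_ms,
    # then discard IPUs shorter than min_ipu_ms.
    merged = []
    for s, e in ipus:
        if merged and (s - merged[-1][1]) * win_ms < min_sil_ms:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return [(s * win_ms / 1000, e * win_ms / 1000)
            for s, e in merged if (e - s) * win_ms >= min_ipu_ms]
```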

28Next, we manually checked the IPUs (Bigi & Priego-Valverde 2018). This consisted of adding, deleting, merging or splitting IPUs, and adjusting boundaries. These actions can be performed with most annotation tools featuring a timeline, such as ELAN (Sloetjes & Wittenburg 2008). However, for speech segmentation at various phonetic levels (see section 3.3), the IPU boundaries must be carefully examined and corrected. We therefore used software dedicated to speech analysis in phonetics, such as Praat (Boersma & Weenink 2001).

29The third and final step was orthographic transcription. Depending on the transcriber’s preference, transcription was carried out in Praat or in ELAN. For our corpus, we adopted the SPPAS transcription convention9 for French and English. For Mandarin Chinese, we followed the convention elaborated in Cappellini (2014). These conventions include filled pauses, short pauses, truncated words, repetitions, noises, laughs, and more. The time required to transcribe and align the speech with the sound varies according to the language (typing in simplified Chinese is different from typing in Western languages that use the Latin alphabet) and the expertise of the transcriber. The eight transcribers who worked on the VAPVISIO corpus took between 8 and 15 hours to transcribe one hour of DVC interaction.

3.2 Semi-automatic annotation of gaze

30As described above, the pre-processed eye-tracking data exported from Tobii Studio contained information about the location of the participant’s gaze on the screen at any given time. Areas of interest (AOIs) make it possible to determine at which moments the participant is looking at a specific, predetermined area of the screen. AOIs can be manually defined within the software by placing rectangles around different areas of the screen. This works well for fixed elements such as chat windows, webcam images, web browsers, and open documents. However, it is more challenging to define AOIs for onscreen elements whose position and size vary substantially, such as the distant partner’s face. For this we used software called OpenFace 2.2.0 (academic license) (Baltrušaitis et al. 2018).

Semi-automatic creation of AOIs for interlocutors’ faces

31OpenFace is an open-source tool for facial behavior analysis. It includes facial landmark detection, head pose estimation, facial action unit recognition and gaze direction estimation. First, the face is detected by the Multi-Task Convolutional Neural Network (MTCNN) face detector (Zhang et al. 2016). Then, facial landmark detection is performed using the Convolutional Experts Constrained Local Model (CE-CLM) algorithm (Zadeh et al. 2017). Markers are automatically placed on the contours of the face, eyebrows, eyes, nose, and mouth.

32The FaceLandmarkVidMulti executable program was used to analyze the dynamic screen recording. The program is able to detect multiple faces on the screen, such as the distant interlocutor’s face, the local interlocutor’s own webcam image, and any faces on open web pages. A CSV file is created, containing the pixel coordinates (x, y) of 68 facial landmarks detected for each video frame and for each face, with the associated confidence levels.

33Using this output file, some post-processing was performed in Matlab in order to define the AOI of the distant partner’s face. First, using the coordinates of the facial markers, a rectangle was defined in order to delimit the face. Horizontal margins of 25 pixels were added to the left and rightmost coordinates, as well as vertical margins of 25 pixels under the chin and 50 pixels above the eyebrows. This was done in order to compensate for any minor inaccuracies in the gaze coordinates calculated by the eye-tracker. Second, in order to select the distant interlocutor’s face and not the local interlocutor’s own webcam image, we set the minimum expected facial dimensions to 150x150 pixels and kept only the largest face. Third, for annotation purposes, a video of the screen recording was generated with the superimposed AOI of the distant interlocutor’s face as well as fixation points.
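The post-processing just described was carried out in Matlab; as an illustration, here is a comparable sketch in Python. It assumes that the OpenFace CSV exposes one row per detected face per frame, with a frame column and landmark columns named x_0…x_67 and y_0…y_67; both the file name and these column names should be checked against the actual export.

```python
import pandas as pd

# OpenFace FaceLandmarkVidMulti output (placeholder file name).
lm = pd.read_csv("screen_recording_openface.csv")
lm.columns = lm.columns.str.strip()  # headers sometimes carry stray spaces

X = [f"x_{i}" for i in range(68)]
Y = [f"y_{i}" for i in range(68)]

def face_aoi(row, margin=25, top_margin=50, min_size=150):
    """Bounding box around the 68 landmarks of one face, enlarged by the
    margins used in the project; returns None for faces smaller than
    150x150 px (most likely the local webcam thumbnail)."""
    left, right = row[X].min(), row[X].max()
    top, bottom = row[Y].min(), row[Y].max()
    if right - left < min_size or bottom - top < min_size:
        return None
    return left - margin, top - top_margin, right + margin, bottom + margin

# For each video frame, keep the AOI of the largest detected face.
aois = {}
for frame, faces in lm.groupby("frame"):
    boxes = [b for _, r in faces.iterrows() if (b := face_aoi(r)) is not None]
    if boxes:
        aois[frame] = max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
```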

Fixation annotations

34Each AOI corresponds to a tier with a specific label, for instance “fixaoi_openface_main” for the distant interlocutor’s face detected by OpenFace, “fixaoi_instant_msg” for a chat window, or “fixaoi_web_page” for an open browser window. Each fixation inside an AOI is annotated with the duration of the fixation. The resulting annotation file is formatted as a tab-separated TXT file containing one row per annotation, with the following elements (a minimal writer for this format is sketched after the list):

  • tier name (AOI label);

  • start time;

  • end time;

  • duration;

  • mean confidence level of the gaze coordinates from the eye-tracker;

  • mean confidence level of the face detection from OpenFace.
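The sketch below writes rows in this format for fixations that fall inside a given AOI. The point-in-rectangle test, variable names and example values are ours, for illustration only; they do not reproduce the project’s actual pipeline, which also aggregates the confidence levels reported by the eye-tracker and by OpenFace.

```python
def inside(aoi, x, y):
    """Point-in-rectangle test; aoi = (left, top, right, bottom) in pixels."""
    left, top, right, bottom = aoi
    return left <= x <= right and top <= y <= bottom

def write_fixation_tier(path, tier, fixations, aoi, gaze_conf=1.0, face_conf=1.0):
    """Append one tab-separated row per fixation located inside the AOI.
    `fixations` is an iterable of (start_ms, end_ms, x, y) tuples."""
    with open(path, "a", encoding="utf-8") as out:
        for start, end, x, y in fixations:
            if inside(aoi, x, y):
                row = [tier, start, end, end - start, gaze_conf, face_conf]
                out.write("\t".join(str(v) for v in row) + "\n")

# Illustrative call: one fixation that falls inside the face AOI.
write_fixation_tier("session_01_fixations.txt", "fixaoi_openface_main",
                    [(1200, 1560, 640, 380)], aoi=(500, 250, 800, 520))
```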

35These annotations were subsequently imported into ELAN. The following screen capture shows the AOIs for the OpenFace detection, the instant message panel and the web browser window. The tiers at the bottom of the screen show the fixation occurrences, labelled as durations in milliseconds, in the different AOIs.

Figure 1. Screen capture of ELAN

3.3 Multi-layered annotation

36ELAN was used to gather and simultaneously visualize the multiple annotations described above, each with its own tier. The following tiers were used for subsequent analysis:

  • Speech transcription (local interlocutor);

  • Speech transcription (distant interlocutor);

  • Text chat from the local interlocutor (transcribed following Cappellini, 2014);

  • Text chat from the distant interlocutor;

  • Fixations on the distant interlocutor’s face;

  • Fixations on the local interlocutor’s own webcam image;

  • Fixations on the chat window.

37When there was a second distant interlocutor, two additional tiers were created for that interlocutor’s oral and written productions.
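The authors imported these annotations through ELAN itself; as an alternative sketch, the tier structure above can also be assembled programmatically with the pympi-ling library (not mentioned in the project). Tier names for the speech and chat layers, as well as the annotation values, are illustrative.

```python
import pympi

eaf = pympi.Elan.Eaf()  # new, empty ELAN document

# One tier per annotation layer listed above (speech/chat names are placeholders).
for tier in ["transcription_local", "transcription_distant",
             "chat_local", "chat_distant",
             "fixaoi_openface_main", "fixaoi_instant_msg", "fixaoi_web_page"]:
    eaf.add_tier(tier)

# Illustrative annotations (times in milliseconds).
eaf.add_annotation("transcription_local", 1000, 2450, "so how was your week")
eaf.add_annotation("fixaoi_openface_main", 1200, 1560, "360")

eaf.to_file("session_01.eaf")
```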

4. Discussion

38The corpus that we have collected consists of 77 DVC sessions totaling 64 hours and 47 minutes. As of writing, 43 sessions totaling 39 hours and 18 minutes have been transcribed. Moreover, for the F1L-based exchange between AMU and UCB, the corpus includes 6 fully transcribed stimulated recall sessions of roughly 45 minutes each.

39Our corpus is the basis for ongoing analysis. As specified above, the main objective of the project is to study the development of language learners’ multimodal competence during teletandem interaction and tutors’ TSP competence during F1L-based interaction. We implement a three-step process to answer our research questions. First, in order to detect any development of multimodal and/or TSP competence within each telecollaborative setting, we search for differences between the first and last DVC sessions of each group. Second, in order to compare the two telecollaborative settings, we analyze variations between the teletandem and F1L-based groups, distinguishing between languages (French, English or Mandarin Chinese). A close examination of the differences will enable us to determine which of these competencies require formal training. Due to the formal training involved in the F1L-based projects, we posit that these tutors will develop broader skills than will the learners in the teletandem settings. This hypothesis has been partially confirmed by Cappellini (2021). Third, for the F1L-based projects, we draw comparisons between teaching French and teaching Mandarin Chinese. This will enable us to understand which characteristics are specific to each language and how they impact DVC pedagogical interaction. Finally, in order to incorporate eye-tracking technology into our study of TSP competence, we have implemented the ecological approach developed from small case studies on the topic (Cappellini & Hsu 2022). Studies are currently being published and are available for consultation.10

Conclusion

40We began this article by revealing gaps in the literature regarding teacher training in the field of online language tutoring. This enabled us to provide the theoretical grounding for the VAPVISIO project. We then provided a detailed account of the methodological choices and procedures that we followed during the construction of the VAPVISIO corpus. This should allow other researchers to replicate our protocol for data collection and semi-automatic annotation. Our multilingual and multimodal corpus was constructed in order to study the development of TSP competence by tutors and learners interacting in different DVC environments. One innovative aspect of our corpus is that it includes eye-tracking data that are integrated directly into the multimodal annotations, providing a window into how the multimodality of the DVC environment is perceived and used during the interaction itself (Cappellini & Hsu 2022). We hope that this corpus will be used to explore new research questions beyond the ones that it was originally designed to answer. For example, our corpus lends itself well to the study of interactional alignment, which is an emerging topic in the SLA community (Michel & Kim 2022).


Bibliography

Aranha S. & Wigham C. R. (2020). “Virtual exchanges as complex research environments: facing the data management challenge. A case study of Teletandem Brasil”, Journal of Virtual Exchange 3: 13-38.

Baltrušaitis T., Zadeh A., Lim Y. C. & Morency L.-P. (2018). “OpenFace 2.0: Facial behavior analysis toolkit”, in IEEE International Conference on Automatic Face and Gesture Recognition.

Bigi B. (2015). “SPPAS - Multi-lingual Approaches to the Automatic Annotation of Speech”, The Phonetician 111-112: 54-69.

Bigi B. & Priego-Valverde B. (2018). “Search for Inter-Pausal Units: application to Cheese! corpus”, in 9th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, 289-293.

Blache P., Bertrand R., Bigi B., Bruno E., Cela E., Espesser R., Ferré G., Guardiola M., Hirst D., Magro E.-P., Martin J.-C., Meunier C., Morel M.-A., Murisasco E., Nesterenko I., Nocera P., Pallaud B., Prévot L., Priego-Valverde B., Seinturier J., Tan N., Tellier M. & Rauzy S. (2010). “Multimodal Annotation of Conversational Data”, in The Fourth Linguistic Annotation Workshop, ACL 2010: 186-191.

Boersma P. & Weenink D. (2001). Praat: doing phonetics by computer [Computer program], Version 6.0.37, retrieved 14 March 2018 from http://www.praat.org/.

Cappellini M. (2014). Modélisation systémique des étayages dans un environnement télétandem pour le français et le chinois langues étrangères. Une étude interactionniste et écologique du soutien au développement de la compétence de communication. Université Lille 3 SHS. Unpublished PhD thesis. https://hal.archives-ouvertes.fr/tel-01392190.

Cappellini M. (2020). “Télécollaboration et formation de formateurs en langues au tutorat en ligne. Un état de l’art”, ALSIC 23: https://journals.openedition.org/alsic/4642.

Cappellini M. (2021). “Une approche multimodale intégrant l’oculométrie pour l’étude des interactions télécollaboratives par visioconférence”, Les Cahiers de l’ASDIFLE 31: 99-120.

Cappellini M. & Azaoui B. (2017). “Sequences of normative evaluation in different pedagogical settings through desktop videoconference”, Language Learning in Higher Education 7(1): 55-80.

Cappellini M. & Hsu Y.-Y. (2022). “Multimodality in Webconference-Based Tutoring: An Ecological Approach Integrating Eye-Tracking”, ReCALL Journal 34(3): 255-273.

Cohen C. (2017). “Former à l’enseignement en ligne”, in N. Guichon & M. Tellier (eds.) Enseigner l’oral en ligne. Une approche multimodale. Paris: Didier, 218-242.

Conklin K., Pellicer-Sánchez A. & Carrol G. (2018). Eye-tracking. A guide for applied linguistics research. Cambridge: Cambridge University Press.

Dooly M. (2010). “Teacher 2.0”, in S. Guth & F. Helm (eds.) Telecollaboration 2.0. Bern: Peter Lang, 277-303.

Fuchs C., Hauck M. & Müller-Hartmann A. (2012). “Promoting learner autonomy through multiliteracy skills development in cross-institutional exchanges”, Language Learning & Technology 16(3): 82-102.

Guichon N. (2012). Vers l’intégration des TIC dans l’enseignement des langues. Paris: Didier.

Guichon N. & Tellier M. (eds.) (2017). Enseigner l’oral en ligne. Une approche multimodale. Paris: Didier.

Hampel R. & Stickler U. (2005). “New skills for new classrooms: Training tutors to teach languages online”, Computer-Assisted Language Learning 18(4): 311-326.

Hampel R. & Stickler U. (2012). “The use of videoconference to support multimodal interaction in an online language classroom”, ReCALL 24(2): 116-137.

Hampel R. & Stickler U. (eds.) (2015). Developing Online Teaching Skills. New York: Palgrave Macmillan.

Hauck M. (2010). “Telecollaboration: At the interface between multimodal and intercultural communicative competence”, in S. Guth & F. Helm (eds.) Telecollaboration 2.0. Bern: Peter Lang, 219-244.

Kessler G. (2016). “Technology standards for language teacher preparation”, in F. Farr & L. Murray (eds.) The Routledge handbook of language learning and technology. London: Routledge, 57-70.

LPL – Laboratoire Parole et Langage - UMR 7309 (2021). VAPVISIO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1.1, https://hdl.handle.net/11403/vapvisio/v1.1.

Michel M. & Kim Y. J. (eds.) (2022). Linguistic alignment in Second Language Acquisition: occurrences, learning effects, and beyond. Special issue of System, https://www.sciencedirect.com/journal/system/special-issue/1050NHRJLLN.

O’Dowd R. & O’Rourke B. (2019). “New developments in virtual exchange in foreign language education”, Language Learning & Technology 23(3): 1-7.

Shi L. & Stickler U. (2021). “Eyetracking a meeting of minds: teachers’ and students’ joint attention during synchronous online language tutorials”, Journal of China Computer-Assisted Language Learning 1(1): 145-169.

Shi L., Stickler U. & Lloyd M. E. (2017). “The interplay between attention, experience and skills in online language teaching”, Language Learning in Higher Education 7(1): 205-238.

Sloetjes H. & Wittenburg P. (2008). “Annotation by category – ELAN and ISO DCR”, in Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). www.mpi.nl/publications/escidoc-60774.

Zadeh A., Baltrušaitis T. & Morency L.-P. (2017). “Convolutional experts constrained local model for facial landmark detection”, Computer Vision and Pattern Recognition Workshops.

Zhang K., Zhang Z., Li Z. & Qiao Y. (2016). “Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks”, IEEE Signal Processing Letters 23(10): 1499-1503.


Appendix

List of abbreviations

AOI: Area of interest

CALL: Computer-assisted language learning

CMC: Computer-mediated communication

DVC: Desktop videoconferencing

F1L: Français en (Première) Ligne

IPU: Inter-pausal units

TSP: Techno-semio-pedagogical


Notes

1 Vers une approche comparative de l’apprentissage/enseignement des langues étrangères par visioconférence pour développer les compétences techno-semio-pédagogiques d’enseignants en formation. Our translation: Towards a comparative approach to second language learning/teaching through desktop videoconferencing to develop student teachers’ techno-semio-pedagogical competencies. https://anr.fr/Projet-ANR-18-CE28-0011.

2 Due to lack of space, in this article we will not be able to expand on the semiotic dimension of DVC CMC. The interested reader can refer to Cappellini & Hsu (2022) for a thorough methodological discussion based on the VAPVISIO corpus.

3 Our translation.

4 http://icar.univ-lyon2.fr/projets/ismael/corpus.html

5 The pedagogical activities suggested to the learners for interaction are available at https://amubox.univ-amu.fr/s/rHC6qG7n8mMHyso.

6 https://audacity.fr/

7 An example of our data is available at https://www.ortolang.fr/market/corpora/vapvisio.

8 https://www.tobiipro.com/fr/produits/tobii-pro-studio/

9 Available at https://www.ortolang.fr/market/corpora/sldr000873.

10 https://hal.archives-ouvertes.fr/search/index?q=vapvisio



To cite this article

Electronic reference

Marco Cappellini, Benjamin Holt, Brigitte Bigi, Marion Tellier and Christelle Zielinski, “A multimodal corpus to study videoconference interactions for techno-pedagogical competence in second language acquisition and teacher education”, Corpus [Online], 24 | 2023, published online 15 January 2023, accessed 4 June 2023. URL: http://journals.openedition.org/corpus/7440; DOI: https://doi.org/10.4000/corpus.7440


Authors

Marco Cappellini

Aix-Marseille Université and Laboratoire Parole & Langage (UMR 7309).

Benjamin Holt

Université de Lille and Laboratoire Savoirs, Textes, Langage (UMR 8163).

Brigitte Bigi

CNRS and Laboratoire Parole & Langage (UMR 7309).

Marion Tellier

Aix-Marseille Université and Laboratoire Parole & Langage (UMR 7309).

Christelle Zielinski

Centre de Ressources Expérimentales, Institute of Language, Communication and the Brain.


Copyright

All rights reserved
