
Machine Translation in Foreign Language Writing: Student Use to Guide Pedagogical Practice

Les traducteurs en ligne et l'écriture en langue étrangère – Usages étudiants pour guider la pédagogie
Emily A. Hellmich

Abstract

Online translators (e.g., Google Translate, Yandex, DeepL Translate), tools that are increasingly accessible, efficient, and accurate, have important consequences for foreign language learning. Research and pedagogy on online translators often rely on surveys of instructors and students and on students' self-reported practices and uses. We know less about students' actual behaviors and uses when they turn to online translators for written production tasks. The aim of this study is to refine recommendations for integrating online translators into foreign language teaching. The study draws on an analysis of the on-screen behaviors of 26 university learners of French as a Foreign Language (FFL). The task given to them was a short written composition (150 words) suited to the novice level. Several types of data were collected: screen recordings, retrospective recall interviews, and general interviews. These data were submitted to a "critical incident" analysis, a critical incident being an event that influences the course of learning and teaching. In the context of this study, such an analysis makes it possible to identify the actions and knowledge that support or hinder writing with an online translator.

Of the 26 participants, 23 used online translators while writing. The analysis highlighted several actions, knowledges, and motivations that supported or hindered writing with an online translator, the most prominent of which are discussed in the article. Supportive actions and knowledge included a fine-grained awareness of the online translator's limitations and practices that sought to compensate for those limitations (e.g., appropriate input and/or careful analysis of the output). Conversely, inappropriate input (too short, too long), uncritical use of the results, and a perceived lack of time hindered foreign language writing. These actions and knowledges are illustrated through four "incidents" drawn from the data.

Several recommendations for integrating online translators into language courses emerge from this study. For example, while discussing the strengths and limitations of online translators is already part of existing pedagogical recommendations, this study underscores the need to go further, notably through rigorous training on online translators that puts those strengths and limitations into practice. Such training should address both the tool's input and its output: how to search with an online translator and how to analyze the results. These recommendations would also help develop learners' critical digital literacy.

The study also points to several new directions for reflection and research, including how the use of online translators differs across learner proficiency levels and across languages.


Full text

This publication was funded by a CERCLL Faculty Research Fellowship from the Center for Educational Resources in Culture, Language and Literacy.

1. Introduction

1Machine translation (MT) is the use of software to automatically translate text from one language to another (Qun & Xiaojun, 2015, p. 105). The latest iterations of MT are faster, more efficient, and provide more accurate translations in languages with sufficient databases (Kelleher, 2019; Lewis-Kraus, 2016; Poibeau, 2017; Wu et al., 2016). For example, Google launched a new version of its Google Translate platform that uses a form of deep learning in 2016 (Le & Schuster, 2016). The improved performance of this updated version has been documented by the MT industry (Lewis-Kraus, 2016; Wu et al., 2016) as well as language teaching/learning professionals (Briggs, 2018; Ducar & Schocket, 2018; Stapleton & Kin, 2019).

2In the realm of language teaching and learning, students have reported wide-spread use of MT tools (Bourdais & Guichon, 2020; Clifford et al., 2013; Jolley & Maimone, 2015; O'Neill, 2019; White & Heidrich, 2013). Studies have also found that students report using MT for different purposes (eg, vocabulary, writing, double checking their work) (Bahri & Mahadi, 2016; Bourdais & Guichon, 2020; Clifford et al., 2013; Larson-Guenette, 2013; O'Neill, 2019) and for a range of reasons, including lack of confidence and MT tools' speed (Bahri & Mahadi, 2016; Jin & Deifell, 2013; Larson-Guenette, 2013; O'Neill, 2019). This reported use remains high despite students' parallel concerns about the accuracy of machine translation (Jolley & Maimone, 2015; O'Neill, 2019; White & Heidrich, 2013).

3Instructors, however, remain reticent about the use of MT in the foreign and second language classroom, with limited overall integration of these tools into language teaching and learning (Briggs, 2018; Hellmich & Vinall, 2021). Indeed, many instructors ban or significantly limit MT tool use by students, citing concerns over cheating and detrimental impacts on learning (Briggs, 2018; Case, 2015; Clifford et al., 2013; Correa, 2011; Hellmich & Vinall, 2021; Jolley & Maimone, 2015; Niño, 2009).

4For instructors who express interest in integrating MT into their classroom practices, some pedagogical suggestions are available. Instructors are encouraged to explicitly discuss the strengths and weaknesses of machine translation tools (Ducar & Schocket, 2018), perhaps through the use of different translation activities (ie, pre-editing, post-editing) (Correa, 2014; Jiménez-Crespo, 2017; Niño, 2008). Other pedagogical suggestions include introducing students to additional online resources and training students on the use of these resources, including MT tools (Ducar & Schocket, 2018; Jolley & Maimone, 2015; White & Heidrich, 2013).

5These suggestions are valuable: given MT's prevalence in society today, its wide-spread student use, and its increasing accuracy, it is no longer prudent or tenable to ban MT in the language learning classroom. However, most of the pedagogical guidelines currently available are based on reported student use–that is, how students think they use MT tools. An understanding of how students actually engage with machine translation tools could lead to the development and dissemination of more tailored pedagogical strategies that reflect specific student practices.

1.1. Computer Tracking

6Calls to study how students use different technological software have been numerous (Chun, 2013; Fischer, 2007; Hamel, 2013; Hamel & Séror, 2016; Mroz, 2014). Computer tracking technologies–screen recording, eye trackers, key stroke and data logs–offer one path to answering this call: with these tools, researchers have the opportunity to observe how participants engage with technological tools and platforms (Caws & Hamel, 2016; Hamel, 2012).

7A few computer tracking studies have examined how foreign and second language writers use machine translation tools. Garcia and Pena (2011), for instance, used screen capture technology and keystrokes to study how MT tools impacted beginning-level Spanish as a Foreign Language learners' writing (n=9). Their results indicate a mixed profile: while students who used MT paused less while writing than students who did not, MT did not always result in successful interventions or edits.

8Some studies (Deifell, 2018; Tight, 2017) looked at MT within a larger ecology of online tool use. For instance, Tight (2017) used screen capture technology to trace which online tools students used and the outcomes of that use. He found that undergraduate learners of FL Spanish (n=12) frequently used platforms with MT functionality (Google Translate, SpanishDict) as well as additional online tools like online dictionaries and conjugators. Similarly, Deifell (2018) used screen-capture, stimulated recall, and interviews to examine how learners of Spanish as a Foreign Language (n=2) used online tools while writing. She found that learners' use of MT tools was often intertwined with other online tools. Moreover, student MT use was often strategic, based on detailed assumptions about the pros and cons of these tools.

1.2. Critical Incidents

9The current article looks to extend work done on how students use MT with a computer tracking study of novice-level learners of French as a Foreign Language. More specifically, this article combines computer tracking with a focus on critical incidents and the Critical Incident Technique (CIT). Broadly speaking, a Critical Incident (CI) is defined as an event that makes a "significant contribution, either positively or negatively to the general aim of the activity" (Flanagan, 1954). While originally deployed in the context of aviation psychology, CIT has been productively used in a range of fields, from psychology to library sciences to user experience research (Farrell & Baecher, 2017; Hughes, 2007; Oishi, 2017).

10In education research, CIT has been used to examine interactions, disruptions, and tensions in instruction and learning. In educational contexts, critical incidents are events that impact the targeted flow of teaching and learning processes (Farrell & Baecher, 2017; Tripp, 2011). Importantly, the onset of a critical incident represents an opportunity to reflect on and analyze these teaching and learning processes. For instance, in foreign and second language research, Finch (2010) used CIT to identify key moments in the emergent language learning process and to suggest how to maximize positive CIs and minimize negative ones. Fuchs (2019) applied CIT to a Hong Kong-Germany telecollaboration to examine interactional breakdowns and to subsequently improve virtual exchanges.

11In the context of this article, critical incidents are defined as the actions (practices with MT tools) and cognitive processes (motivations to use MT tools, knowledges of MT tools) that support or hinder the use of MT in foreign language writing.

1.3. Ecological Theoretical Approach

12The study is anchored in an ecological theoretical perspective. In contrast to theoretical approaches that conceptualize language learning as a rigid and abstract process within closed systems, ecological approaches conceptualize language learning as occurring within complex and dynamic systems characterized by non-linearity, emergence, and relationality (Kramsch, 2002; Larsen-Freeman, 2013, p. 1). An ecological approach to technology and language teaching and learning examines the use of technology from the lens of ecosystems: the various factors (eg, experience, beliefs, platform design, policy) that interact across scale levels (eg, individual, classroom, institution, society) to impact how language learners use digital technologies (Blin, 2016, p. 75). In the context of the current study, this theoretical approach acknowledges that student use of MT does not occur in a vacuum but is influenced by a range of factors. Said another way, the critical incidents that occur while students use MT are not only related to the tools themselves but stem from complex interactions between tools, students, and the learning environment. In approaching critical incidents from this perspective, the current article endeavors to identify both behaviors and mindsets that can then be used as a basis for pedagogical strategies that address MT in the language classroom.

1.4. Research Question

13What actions and cognitive processes characterize how novice learners of French as a Foreign Language use MT for writing?

2. Research Design

2.1. Context and Participants

14This article is drawn from a larger computer tracking study of how foreign language learners (French, Spanish, Mandarin) use online resources when writing. The current study focuses on the French language learners who participated in this larger project. The core of the study was a writing task: participants were asked to complete a short written essay in the target language while their screen was recorded and to reflect on that task through a retrospective recall and post interview (additional details below). The writing task (Appendix) was created to match students' proficiency level and to mirror activities that students would typically encounter at this level (Joyce, 1997, p. 59). All study components were completed via Zoom.

15Participants were recruited from French courses at two institutions in the United States: one large public university (Arizona) and one community college (California). An invitation to participate in the research study was distributed to students via their class email listservs and websites. A total of 26 French language learners completed study sessions (see Table 1 for demographic breakdown of participants). Participants' written proficiency level was assessed based on the writing task done in the study session. Student proficiency levels can be found in Table 1 (Council of Europe, 2020).

Table 1–Participant summary.

Total Participants: 26

Gender: Female – 20; Male – 6
Institution: Public University – 21; Community College – 5
Proficiency: A1 – 9; A2 – 17
Age: 18-22 – 21; 23-26 – 3; 27-35 – 0; 36-49 – 0; 50+ – 2

2.2. Data Collection

16The primary data sources used in this study were: screen recordings of participants completing the writing task; a retrospective recall; and a post-interview.

2.2.1. Screen Recording

17While participants completed the writing task, the session was recorded using two separate systems: Screencastify and Zoom. The double screen capture technique was implemented to enable the remote completion of the study. The Zoom recording, available only after the entire session was completed, was used by researchers for analysis. The Screencastify recording, available immediately following the task completion, was used as the basis for the retrospective recall. The total amount of task time recorded was 340 minutes, with an average of 13 minutes per participant.

2.2.2. Retrospective Recall

18A retrospective recall asks participants to narrate a previously-completed task or event (Zhang & Zhang, 2020). When used in language learning and teaching contexts, this methodological approach allows researchers to gather information about learner thought processes during a particular event (Gass & Mackey, 2016, p. 21) while mitigating the risks associated with simultaneous narration (Bowles, 2018; Zhang & Zhang, 2020).

  • 1 See Marsden & Mackey, 2011.

19To help students narrate their cognitive processes during the task itself, the retrospective recall was completed immediately after the task completion. In addition, specific training was provided to participants to encourage them to narrate that specific moment, as opposed to their current interpretation (Bowles, 2018; Gass & Mackey, 2016). (The full protocol for the retrospective recall is available on IRIS1, a repository of applied linguistic data collection instruments.) Finally, participants were shown specific video-taped moments from their task completion, to help them return to that specific moment.

20These specific moments were identified by the researchers as the participants completed the task. These moments of interest were called "transactions":

instances which expressed an immediate need on the part of the writer and his or her efforts to respond to a problem as identified through a series of visual signals in the screen recordings (for example, a pause, followed by the deletion of a word and the insertion of a new word, followed by another pause before continuing to write another sentence) (quoted from Hamel and Séror (2016), in reference to a study by Park and Kinginger, 2010).

21For this study, transactions that hinged on leaving the composition document to use online tools were the focus.
All retrospective recalls were video recorded via Zoom. The video files and the accompanying transcripts (automatically produced by Zoom and manually reviewed by a research assistant) were used as the basis for analysis. These data totaled 390 minutes of video (with an average of 14 minutes per participant) and 470 pages of transcripts.

2.2.3. Post-interview

22Semi-structured post-interviews offered an opportunity to triangulate data from the screen recording observation and retrospective recall (Patton, 1990; Spradley, 1979). Broad categories included behavioral questions (what tools they use, how often, and how), attitudinal questions (tool preference, satisfaction with tools and results), and opportunities to follow-up on particular reactions or actions observed in the task completion or retrospective recall (Caws & Hamel, 2016; Hamel & Caws, 2010; Kuniavsky, 2003).

23All post-interviews were video recorded via Zoom. Transcripts (automatically produced by Zoom and manually reviewed by a research assistant) were used as the basis for analysis. The interviews produced 312 minutes of audio (with an average of 12 minutes per participant) and 440 pages of transcripts.

2.3. Data Analysis

24Methodologically, critical incidents were identified through iterative coding of the data (Miles & Huberman, 1994). Critical incidents were defined as the actions (practices with MT tools) and cognitive processes (motivations to use MT tools, knowledges of MT tools) that supported or hindered MT use and written text production. This definition of critical incidents included errors, as they are traditionally defined–grammatical, morphological, syntactic, or pragmatic mistakes. Importantly, however, this definition also extended beyond errors to include actions and mindsets that supported or hindered meaning-making more broadly. For instance, actions or motivations that led to more successful written texts–that is, texts that were better able to communicate the author's intent, despite the presence of some traditionally-defined errors, would be considered as supporting writing, as they supported meaning-making more broadly. This choice was made to align with both a theoretical stance on what constitutes language (ie, language as more than production of rigid code but as a system of meaning-making) (Blommaert, 2010; García, 2009; Kramsch, 2014) and a pedagogical stance on what constitutes language learning (ie, communicative language teaching trends that focus on negotiation of meaning despite imperfect language skills) (Lightbown & Spada, 2011).

25CIs were identified from both student and researcher perspectives (Farrell & Baecher, 2017; Finch, 2010; Fuchs, 2019)–that is, CIs were identified based on student perceptions of what helped or hindered their use of MT tools, drawn from the stimulated recall and post-interviews, and on research analyses of the task completion, retrospective recall, and post-interview. CIs relating to actions were primarily identified in the task completion recordings. CIs relating to cognitive processes stemmed primarily from the retrospective recalls and post-interviews. While this article focuses specifically on the CIs related to MT, data were coded for all CIs.

26Once a stable list of CIs had been identified, the critical incidents were analyzed using the following coding scheme:

  • Tool choice: what tools students used, why students used tools

  • Tool input: what students put into tool, what students intended to accomplish with particular tool input

  • Tool output: what students did with the output, what cognitive processes motivated what they did with tool output

  • External factors: what other factors influenced action and thinking, outside the tool

27To increase reliability of the analysis, coding was completed by multiple researchers (the author and a co-collaborator). Discrepancies in code application were identified, discussed, and resolved (Lew et al., 2018).

28Incidents were then grouped into broad categories of actions and cognitive processes that helped or hindered the writing process. (See Table 2 for a full list.) During this phase of analysis, the validity of categories was built primarily by looking for negative evidence and drawing constant comparisons to ensure that each challenge was distinct (Lew et al., 2018; Miles & Huberman, 1994).

Table 2–Critical incident categories.

Actions:

  • *Input: Too little (MT like a dictionary); Too much (MT as translator/editor)

  • *Output: (No) Analysis; (No) Context

Cognitive Processes:

  • Confidence

  • Metalinguistic Awareness

  • Concerns over plagiarism

  • *Tool Awareness

  • *Time

*Critical incident categories described in the current paper

3. Findings

29Overall, of the 26 participants, 23 used some form of machine translation (Google Translate, Reverso Translate, Yandex). An additional two participants mentioned using machine translation but did not use it during the study session. Only one participant did not use or mention any form of machine translation during the study session.

30The analysis of student MT use revealed several categories of actions and cognitive processes that impacted writing at the novice level. The current article focuses on the most prominent actions and cognitive processes, that is, those that were most commonly seen across the student sample. (Limitations of this approach–discussing broad trends in the participant sample–and how to mitigate such limitations are discussed in the penultimate section of the article.) The most prominent actions and cognitive processes were: what students put into MT tools (too much vs too little input); what students did with MT output (no analysis/context-seeking vs analysis/context-seeking); how aware students were of MT tool capabilities; and time as a motivation to use MT.

31Importantly, these actions and cognitive processes were often related and overlapping, with use practices being driven by motivations and vice versa. To respect this complexity, the findings are organized around four illustrative examples from the data (presented in italics) that showcase the principal findings in the data.

3.1. Masha: Struggles with Input, Output, and Tool Awareness

32Toward the beginning of her writing task, second-year student Masha (all names are pseudonyms) paused and stared at her written work, the cursor blinking. She was not sure how to say in French that there were many cities near Phoenix (Figure 1a). Masha opened Google Translate and typed “near” into the Google Translate search box (Figure 1b). Without a pause or review, she swiped back to her writing document, typing the result she had seen and completing her sentence (Figure 1c).

Figure 1–Screenshots from Masha's task completion.

33This strategy of consulting Google Translate for single words was common throughout Masha's task completion. Indeed, she described this approach as an intentional one:

I feel like Google Translate is not like something we should rely on. So I just usually try to use it to look up a word, not like a whole sentence. Or if I like look up a whole sentence I try to look at like the grammar, but a lot of times I know it [Google Translate] doesn't translate accurately.

34Masha's use of Google Translate illustrates several of the prominent trends in critical incidents with machine translation observed in the data. First, Masha input too little into the machine translation platform to get a reliable result. This action occurred with more than half of the participants. For more accurate results, the algorithms that undergird current versions of machine translation require longer strings of input in order to discern the "context" of the query, defined in machine learning as proximity to the target word in the large databases of language that drive MT processes (Poibeau, 2017).

35Inputting too little caused students layered challenges in their writing. For Masha, this type of use led to missing a component of the target form–in this case, a vital preposition (de). For others, putting too little into the MT search engine resulted in different types of obstacles to meaning-making, such as lack of agreement (eg, looking up "food," rather than "the food," results in "nourriture," without an indication of the noun's gender); mistaking nouns for verbs (eg, looking up "swim," rather than "to swim," results in "natation" rather than "nager"); and missed translations (eg, looking up "home" for the more sentimental, intangible sense of "my home" results in "maison").
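
To make this input-length effect concrete, the following minimal sketch runs bare words and slightly fuller phrases through an open-source neural MT model (the Helsinki-NLP/opus-mt-en-fr model via the Hugging Face transformers pipeline), used here purely as a stand-in for the commercial tools students consulted; the exact outputs depend on the model and will not necessarily match what Google Translate returns.

```python
# Minimal sketch: how input length/shape changes what an NMT system can infer.
# Uses the open-source Helsinki-NLP/opus-mt-en-fr model via Hugging Face
# transformers as a stand-in for the commercial tools students used
# (pip install transformers sentencepiece torch). Outputs will vary by model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

# Bare words, as in Masha's "near" query, vs. inputs that carry a bit of
# context (an article to surface gender, "to" to mark the infinitive, a phrase).
queries = [
    "near",                 # bare word, no context
    "near Phoenix",         # minimal context for the same word
    "food", "the food",     # article helps surface the noun's gender
    "swim", "to swim",      # infinitive marker disambiguates noun vs. verb
]

for query in queries:
    result = translator(query)[0]["translation_text"]
    print(f"{query!r:25} -> {result!r}")
```

In a classroom, the same contrast can of course be shown directly in a web interface; a script like this simply makes the comparison repeatable.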

36This action, too little input, was coupled with a second action that hindered Masha's writing experience in French: uncritically using the machine translation's output. Indeed, for many student participants, a significant obstacle to meaningful text production through MT stemmed from not analyzing the output produced by MT tools. Rather than examining the output or seeking additional information to make an informed decision, students who did not analyze MT output accepted this output as correct and transferred it automatically to their composition documents. This action was often explained in stimulated recalls, where students granted MT greater authority than their own abilities. For instance, Anne explained why she automatically transferred an MT result to her composition document: "I just believed it [Google Translate]."

37Lastly, Masha's use of Google Translate, both in inputting too little text and in not analyzing the results, illustrates a mindset that hindered many students in their use of MT for foreign language writing: a limited awareness of MT tool capabilities. While Masha was aware that MT was an imperfect tool, this awareness did not extend to what to input into the tool or the need to analyze results more closely. This lack of tool awareness was also common amongst student participants (approximately half of participants) and took different forms for different students, from uncertainty about what to put into MT tools to what to do with MT output to what tools to use at all.

3.2. Jamie: Input, Output, and Tool Savvy

38Jamie was completing the sentence "San Diego est mon ville préfére [sic] parce que…" She paused after "parce que" and then typed "il y a" (Figure 2a). She then immediately opened an internet browser window and navigated to Google Translate. In the input window, she typed "there are many" (Figure 2b). She paused for several seconds and then changed her original input to "there are some" and then to "there are lots of beaches" (Figure 2c). She paused again and then returned to her document, where she completed the sentence with "beaucoup de plages" (Figure 2d).

Figure 2–Screenshots from Jamie's task completion.

39Jamie described this approach in her retrospective recall:

Researcher: And I see you're trying out a couple of different options. "There are many." And "there are some" and what were you thinking?

Jamie: So a lot of times when I use Google Translate, especially for sentences or sentence fragments, I try a couple different ways of phrasing it because Google Translate sometimes picks different ways to say it or different tenses to say it in which are not correct for what I'm doing.

40Soon after her search for "il y a," Jamie returned to Google Translate. She had been reading the next prompt question: "what are the best places to visit in the city?" (Figure 3a). She typed "the beach" into the input box (Figure 3b). Jamie explained in her retrospective recall that

I looked it up because I didn't know if it was masculine or feminine or not. And that's why specifically I typed "the [emphasis added] beach," because if you just type in a word, it'll just give you the word without the—I forget what you call it, but it doesn't say whether it's masculine or feminine, so I always type in "the."

Figure 3–Additional screenshots from Jamie's task completion.

41In contrast to Masha, Jamie demonstrated actions and cognitive processes that supported her use of machine translation for foreign language writing, namely an awareness of the tool's capabilities, combined with actions to take into account those capabilities. For instance, Jamie altered the input into the tool (Figures 3a-b) to take into account the fact that Google Translate would only give the article for the desired noun with specific prompting. Moreover, Jamie's repeated searches in the first example (Figures 2a-d) show an understanding of what appropriate input looks like for MT in addition to actions taken to mitigate idiosyncrasies of the program algorithms.

42Jamie was representative of students who knew how to play with input to get a more accurate translation (approximately a third of participants). Julie, for instance, shifted her input ("I think it is pretty," to "I think Paris is pretty") because "I know Google Translate probably can't track it [the gender of the target noun]." Ellie, in search of the correct third person plural conjugation of "visiter," typed "many people visit" into Google Translate in order to "get the right context."

43Jamie also demonstrated engagement with the output of MT that was characteristic of students who were more successful in leveraging MT for their foreign language writing: in the first example, she reviewed the different outputs of her searches carefully, demonstrating an analysis of the results of machine translation as a way to take into account the idiosyncrasies of one-off translations. Students who engaged with MT output did so in several ways, including the careful review seen with Jamie (seen in approximately 40% of participants). Susan described this analysis as follows:

I trust myself a little bit more than I trust Google Translate. So if [MT] gave me something that made no sense to me, I would just probably either change my sentence or put something else or put what I had originally thought was correct.

44A "correct" product was not always achieved, even when students had a firm grip on MT tool capacities or when they analyzed the MT output. For instance, after doing several searches in Google Translate for "weather" as in "the weather is beautiful", Susan selected "météo," which carries a more technical, meteorological meaning than what she desired. That said, even when analysis did not produce an error-free text, students demonstrated more engagement with language with this practice, spending more time with the language and making informed decisions about use. In this last example, Susan's decision, while wrong, was rooted in a critical analysis: "I thought it looked kind of like the word 'meteorology,' which sounds like that makes sense to me," suggesting a cross-language metalinguistic analysis.

3.3. Sasha: MT as Editor and Cross-Referencing Output

45Ten minutes into the fifteen-minute task, Sasha had not left her composition document. As she finished her response to the last prompt question, she opened her internet browser and navigated to Google Translate. She switched the input and output languages so that the input was French and the output English, and she pasted her entire composition into Google Translate (Figure 4a). She then proceeded to review each sentence of the English translation of her French text provided by Google Translate (Figure 4b). She explained in the retrospective recall that "I was going over the sentences to see if they made sense."

46Toward the end of this review process, Sasha paused and audibly groaned. Her cursor migrated over to the French input and hovered over "irais" (Figure 4c). She explained here that "Well I see that it's wrong but I couldn't remember how it was right. So I did another Google Translate to find the word." Indeed, she then opened a new Google Translate window and typed in "I went" (Figure 4d). She explained that

I was trying to remember if this is how I wanted to use it or if I wanted to use um imparfait? And I just decided to go with this [passé composé]. Because I was going to add my age, so I thought that that would be ok.

Figure 4–Screenshots of Sasha's task completion.

47Sasha's technique–using machine translation as an editing tool for large swaths of text she had composed–was not fail-proof: Google Translate did not pick up on many of the errors in Sasha's text. In other words, the English translation of her written French did not reveal the errors in her text, contrary to her expectation. This shows another side of the input issue students faced when using MT for foreign language writing: inputting too much text, especially text students had composed themselves, was not an effective strategy for identifying errors in their writing.

48Students who relied on this approach to machine translation (approximately one third of participants) did so in several ways. Some, like Sasha, waited until the end of their writing process in French to check their entire text. Others, like Mary, checked sentence by sentence–inputting what they had written in French into a machine translation tool, checking the English translation provided by MT, making edits to their French text, and moving on to composing the next sentence. Two students took a more in-the-moment approach, writing their entire composition in the machine translation window and keeping a constant eye on the English translation that was produced. Only one student in the sample used MT to translate entire sentences that they had written in English into French.

49Parallel to this action was another facet of students' lack of awareness of MT tool capabilities: students like Sasha who used this approach often cited a desire for "instant feedback" and assumed that the tools were able to pick up on many of the issues in their written texts. While this approach may have led students to identify some problems in their texts, it was not able to catch them all.
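
The sketch below illustrates, under the same open-source stand-in assumption as above, why this editing strategy can fail quietly: a French draft containing learner errors (the example sentence is invented for illustration) may still produce a perfectly readable English gloss, so nothing signals the writer that the source needs fixing.

```python
# Sketch of the "MT as editor" strategy Sasha used: paste self-composed French
# into a translator and read the English gloss to spot problems. Open-source
# models (Helsinki-NLP/opus-mt-fr-en) stand in for Google Translate here;
# the commercial tool's behavior may differ.
from transformers import pipeline

fr_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

# An invented learner sentence with errors a teacher would flag: conditional
# "j'irais" where imparfait "j'allais" is intended, and "mon famille" for
# "ma famille" (gender agreement).
draft = "Quand j'étais petite, j'irais à la plage avec mon famille."

gloss = fr_to_en(draft)[0]["translation_text"]
print("French draft :", draft)
print("English gloss:", gloss)
# The gloss may well read as natural English, so the errors in the French
# source never surface: the check can "pass" even though the draft is wrong.
```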

50That said, Sasha also showcased a particular output-analysis strategy that supported her use of MT for foreign language writing: cross-referencing output from machine translation tools with additional searches. Approximately one third of students relied on this kind of action to assess what they received from machine translation tools. Sasha continued to rely on Google Translate in this instance, but other participants drew on additional tools. Elsa, for instance, preferred going to Word Reference or Wikipedia in French to cross-reference results from Reverso Translate because these tools "provide semantic differences" for specific words and phrases.

3.4. Eliza: Time and Tools

51After the task completion and retrospective recall, the researcher was digging deeper into how and why Eliza used the online tools that she did while writing in French:

Researcher: Some of the ways that you use Google Translate in the activity was looking up a word like "ville" or like "famous." Why would you not choose Word Reference in that situation?

Eliza: Um, because Google Translate was right there [snaps fingers]. I know like it sounds so lazy and that's the honest honestness of it is that you search up "French to English" [in internet browser] and it's right there. It doesn't require you clicking another browser and opening another browser. That sounds so stupid, but honestly that is what's going through a kid's head. They just, it's right there in front of them.

52Eliza showcased a final mindset that pushed students to use MT: in the retrospective recalls and in the post-interviews, over half of student participants expressed the perception that MT was easier, faster, and more efficient to use than other online resources available to them. Kelsey explained simply that, with Google Translate, "there's less buttons to click."

53A lack of time also motivated students to rely on MT. Many students, especially those who had drawn on a range of tools throughout their task completion, attributed their increasing use of MT as the task went on to a "time crunch," not having the time needed to use other tools, as Ellie did here when explaining her use of Google Translate at the end of her task completion:

Honestly, at this point, I was kind of trying to be fast. To get as much in as like–I feel like that's why I went to Google Translate for those three words, cuz I was at a time crunch and I just wanted to get all the words down.

54The perceived speed and efficiency of MT tools related to a final component that drove students to MT tools: not having the time to parse the information provided by other tools. Later in her post-interview, Eliza elaborated on this while describing looking up a phrase with "être" on Word Reference:

And by plugging it in, you're clearly not sure what it's even trying to say in the first place. So for [Word Reference] giving you all these different options, it's very overwhelming and you're like, "well, which am I supposed to use? I didn't even know what this word meant, you know?" So that's why I like Google Translate sometimes because even though it's not always the most accurate, it definitely is quicker.

55Anne echoed this sentiment, explaining simply: "What I like about Google Translate is it gives you the answer. Word Reference gives you like, 'in the adjective, it's this. Or in the verb, it's this. Or the whatever, it's this.'" In other words, a factor/mindset that pushed students to use MT, even when they were aware of its drawbacks, was a perceived lack of time to parse the metalinguistic information provided by other tools.

4. Discussion

56This article set out to use computer tracking technology and critical incidents to explore how novice-level learners of French as a Foreign Language used MT for writing. More specifically, the article investigated the actions and cognitive processes that supported or hindered these student writers as a way to guide the future development of pedagogical practice for the integration of MT into language teaching and learning.

57Student participants engaged in a range of practices, driven by diverse mindsets and knowledges, as they completed the writing task. The most prominent categories of actions and cognitive processes that supported foreign language writing were a specific awareness of MT tool limitations paired with appropriate action, namely putting in a sufficient amount of input and analyzing or cross-referencing the results. Conversely, students encountered more challenges in the writing process when they put too little or too much input into MT tools and when they failed to analyze the output. These actions that hindered the writing process were often paired with cognitive processes that centered on time: both the speed/efficiency of MT tools and a lack of time needed to parse the more complex output provided by other tools.

4.1. Extending Understandings of Student Use

58The findings corroborate and extend previous survey-based reports of student use. For instance, this computer-tracking study confirms that students frequently use MT tools when writing (Bourdais & Guichon, 2020; Clifford et al., 2013; Jolley & Maimone, 2015; O'Neill, 2019; White & Heidrich, 2013). The data also shed additional light on the purposes of student use of MT. For instance, student participants did indeed use MT to look up vocabulary, as has been found in previous student report research (Bahri & Mahadi, 2016; Bourdais & Guichon, 2020; Clifford et al., 2013; O'Neill, 2019). However, the focus on critical incidents revealed the challenges that arose when students relied on too little input into MT tools or when they did not analyze the results of their vocabulary searches.

59In addition, student participants used MT to double check their work, in line with some previous survey research on student use (O'Neill, 2019). That said, the observations of students using MT software indicate that double checking work took diverse forms, including, as we saw with Sasha, inputting large swaths of self-composed text into MT. Relatedly, only one student used MT to translate text they had written in English into French. On the one hand, this finding may assuage some teacher concerns that students use MT to cheat or rely on it too heavily for writing (Correa, 2011; Hellmich & Vinall, 2021; Jolley & Maimone, 2015). On the other, there are important limitations to the lab-based study design that would suggest caution in overextending this finding, to be discussed in more detail in the penultimate section of the paper.

60The data of the current study also corroborate student concerns, reported in survey research, about the accuracy of MT tools (Jolley & Maimone, 2015; O'Neill, 2019; White & Heidrich, 2013). However, the usage data also indicate that, while student participants may have been skeptical of MT tools, they often did not have the skills or practices to concretely account for MT tool limitations, which led to obstacles in their writing processes. Part of this gap may have been related to the low proficiency level of the student sample, a limitation discussed below.

61Finally, the cognitive processes that undergirded student use of MT also aligned with those found in some of the survey-based research. For instance, students in this study discussed their use of MT in relation to time and efficiency (Bahri & Mahadi, 2016; Jin & Deifell, 2013; Larson-Guenette, 2013). That said, the data from the retrospective recalls also add a new ingredient to this mix: a perceived lack of time to parse the information provided by non-MT tools drove students to use MT tools.

4.2. Extending Pedagogical Guidelines

62The results of this study suggest several ways to extend current guidelines on MT integration into language teaching and learning contexts, particularly for novice-level learners.

4.2.1. Strengths and Weaknesses

63Discussion of machine translation's strengths and weaknesses has been a central component of pedagogical approaches to MT in foreign and second language teaching and learning (Ducar & Schocket, 2018). However, as seen in this study, students' general awareness of the inaccuracy of MT tools was not enough to support constructive use of these tools. In addition to detailing MT's strengths and weaknesses, then, pedagogical instruction should provide students with concrete training that helps them to act in response to MT capabilities–to take advantage of its strengths and to mitigate its weaknesses.

4.2.2. Training

64Calls for incorporating training about MT into language teaching and learning are not new (Ducar & Schocket, 2018; Jolley & Maimone, 2015; White & Heidrich, 2013). However, the current study showcases in finer detail what the focus of such training might include. For instance, training students about MT tools in language learning should focus more explicitly on what students need to put into MT tools to get a more accurate result as well as what to do with the output. For what to put into MT tools, students should be guided to see what different lengths of inputs (eg, "beach" vs "the beach," "swim" vs "to swim") produce and to practice together how to maximize their input for the best translations (eg, trying several searches that seek the same target language but in different ways). Similarly, for students who use MT as a way to check their work, explicitly showing students what MT is able to pick up in their work and what it is not would be important to supporting their use of MT for writing.
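
A hedged sketch of what such input-focused practice could look like, if scripted for a class demonstration, is given below; the helper function, the model, and the example phrasings are all illustrative assumptions rather than materials from the study.

```python
# Hypothetical classroom helper for the input-focused training described above:
# run several phrasings of the same intended meaning through one MT system and
# print the results side by side for discussion. The model (an open-source
# en->fr model from Hugging Face) and the function name are illustrative; any
# translator the class already uses would work just as well.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

def compare_inputs(phrasings):
    """Translate each phrasing and return (input, output) pairs for discussion."""
    return [(p, translator(p)[0]["translation_text"]) for p in phrasings]

# Jamie-style variations on one target meaning ("there are lots of beaches").
variants = [
    "there are many",
    "there are some",
    "there are lots of beaches",
    "in San Diego there are lots of beaches",
]

for source, target in compare_inputs(variants):
    print(f"{source:40} -> {target}")
```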

65The same kind of granular training should take place for MT output–that is, explicitly showing students what to do with the results of their MT searches. A simple but effective option would be to show students the additional (meta)linguistic information they get when they leave the browser-based version of Google Translate (Figure 5).

Figure 5–(Meta)linguistic information produced by one MT platform.

66Another way to support student use of MT output would be modeling different ways to analyze the output of MT and then offering students opportunities to practice these analytical approaches. For instance, showing students how to do multiple searches with the same target goal (eg, "there are," "there are many things") and comparing the results; how to seek out and comb through examples or definitions to try to find the appropriate translation; or how to use additional tools, online or otherwise, to cross-reference the MT output results.
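
As one possible illustration of the cross-referencing idea, the sketch below sends the same input to two independent systems (a local open-source model and, assuming network access, the Google Translate web service via the deep-translator package) and flags disagreements as places to consult examples or a dictionary; this is a schematic illustration, not the participants' actual workflow.

```python
# Sketch of the cross-referencing strategy described above: send the same input
# to two independent translators and flag disagreements as places to look up
# examples or a dictionary entry before trusting either result. Assumes the
# deep-translator package (a wrapper around the Google Translate web service,
# network required) plus a local open-source model.
from deep_translator import GoogleTranslator
from transformers import pipeline

local_mt = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
web_mt = GoogleTranslator(source="en", target="fr")

def cross_reference(text):
    a = local_mt(text)[0]["translation_text"]
    b = web_mt.translate(text)
    return a, b, a.strip().lower() == b.strip().lower()

for query in ["the weather is beautiful", "my home", "there are lots of beaches"]:
    a, b, agree = cross_reference(query)
    flag = "OK" if agree else "CHECK: results differ; consult a dictionary or examples"
    print(f"{query!r}\n  system A: {a}\n  system B: {b}\n  {flag}\n")
```

Disagreement between systems does not identify which result is right; it simply marks the output as worth a closer look.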

67This kind of training on what to put into MT might also include brief, non-technical explanations of how MT algorithms work. For instance, it would be helpful to name for students what drives machine translation–large databases of naturally-occurring oral and written language, most often from dominant language varieties. Additionally, it would be helpful to underscore, in broad strokes, how translation in deep learning versions of MT occurs: algorithms draw out patterns in the language data itself and require sufficient input to approximate the appropriate translation (Poibeau, 2017).

68Sharing these basic principles of how MT works could have multiple benefits. First, with a more fine-tuned understanding of where MT translations come from, students can better contextualize their results, situating them within particular registers and varieties. Second, this knowledge might help students to better use MT to support their writing by concretizing the actions that are needed to reliably use these tools. Finally, explaining to students the nature of MT tools can contribute to denaturalizing the technology: the seamless prevalence of technological tools in our daily lives has led to their normalization, which engenders a lack of attention to the mediating role technologies play in various personal and professional domains (Jones, 2019; Kern, 2015). Teaching students about the inner workings of MT–how it works, why it works–can shed light on the mediating role of these technologies in our lives and meaning-making efforts, thereby supporting students' critical digital literacies more broadly (Darvin, 2017; Hellmich, 2019).

4.2.3. Additional Considerations

69The data analyzed here suggest additional components of pedagogical guidelines for MT integration into foreign and second language teaching and learning. For instance, the weight of time in student participant use of MT might suggest a shift in assigned activities: reducing time pressure on students–such as by opting for untimed writing tasks that ask students to apply what they've learned previously or implementing a more lenient late policy–might reduce student reflexes to use MT tools or encourage students to use MT in a more thoughtful manner.

70Indeed, using MT and other online tools well requires more time and brainpower than the copy-and-paste impulses some students showcased. Relatedly, then, a potentially productive component of how instructors frame MT for students would be to counter the narrative of "MT as quicker." Rather, messaging and training around MT should emphasize the time required to use MT tools, as a way to encourage uses that will benefit students and their written production processes.

71This point relates to a final consideration that intersects with the broader goals and framing of language education: a focus on meaning-making, rather than form, might assuage some of the underlying motivations to use MT. Reorienting students to the negotiation of meaning through various semiotic systems, including language, might take different forms, such as adjusting grading schemes to balance accuracy with comprehensibility and appropriateness for context.

4.3. Limitations and Future Directions

72The current study is limited in several ways that offer directions for future research. First, the study only looks at novice-level learners of French. Additional research would be necessary to assess student use of MT tools across language proficiency levels and across languages. It would be particularly important to understand student use of MT for languages that are not as well supported by current MT iterations–namely languages that do not have sufficient databases.

73It would also be beneficial to understand student use of MT tools in more naturalistic settings. The current study relied on a lab-like simulation to observe student use. A more naturalistic approach would be to document how students use MT and other online tools for foreign language writing at home. Moreover, it would be important to extend observations over time: while the current study only looked at instances of student use, a more longitudinal approach, with observations taking place over multiple writing sessions, would flesh out understanding of use.

74Another limitation to be addressed in future studies relates to ecological theoretical approaches. The current study looked to describe broad trends in the critical incidents that impacted use of MT across the student sample. While necessary for the article's overarching goal of providing insight for the future development of pedagogical guidelines, this approach does not fully contextualize student use of MT. Another approach, also anchored in an ecological theoretical stance, would be a case study methodology, which would enable researchers to tease out on an individual basis how different components of the teaching/learning ecology (eg, student technical skill and past experience, institutional policies, etc.) impact student actions and cognitive processes surrounding MT and foreign language writing.

75Larger questions also remain at the intersection of MT and language teaching/learning. It is not clear, for instance, if MT supports language learning more broadly. Nor is it clear how MT may or may not be leveraged toward other competences outside written production, such as other forms of presentational communication (eg, oral production), interpretive communication (eg, reading, listening) or intercultural communication and competence. Studies that adopt a larger scope are needed to understand the impact of MT use within the broader ecology of language learning.

76The advent of MT also has implications for theorizations of what it means to learn and know a language. For instance, the easy access to information characteristic of the current era puts into question humanist notions of cognition and knowledge residing principally in the brain and opens doors to new understandings of cognition as distributed across spaces, communities, and tools (Clark, 2010; Pennycook, 2016). Moreover, it would be worth pausing to interrogate how student use and our own pedagogical guidelines around MT reinforce or potentially challenge problematic conceptualizations of language as a system comprised of "correct" vs "incorrect." At the same time that it is important for the applied linguistics community to address MT tools at the classroom level, it is equally important to address these more macro questions, which ultimately have implications for smaller scale levels.

5. Conclusion

77The analysis presented in this article of the critical incidents that novice learners of French as a Foreign Language encountered when using MT showcases the complexity of student use of MT tools, both in terms of their practices with these tools and the thinking that drives these practices. Importantly, these intertwined actions and cognitive processes offer practitioners potential next steps in how to address MT in the language classroom. Moreover, both the insights and questions raised by this portrait of student use of MT offer researchers new paths of inquiry that would expand research agendas on MT and language learning/teaching into important new territory.


Bibliography

Bahri, H., & Mahadi, T. (2016). Google Translate as a supplementary tool for learning Malay: a case study at Universiti Sains Malaysia. Advances in Language and Literary Studies, 7(3).

Blin, F. (2016). Towards an "ecological" CALL theory: theoretical perspectives and their instantiation in CALL research and practice. In F. Farr & L. Murray (Eds.), The Routledge handbook of language learning and technology (p. 39-54). Routledge.

Blommaert, J. (2010). The sociolinguistics of globalization. Cambridge University Press.

Bourdais, A., & Guichon, N. (2020). Représentations et usages du traducteur en ligne par les lycéens. Apprentissage des langues et systèmes d'information et de communication (Alsic), 23(1). https://journals.openedition.org/alsic/4533

Bowles, M. A. (2018). Introspective verbal reports: Think-Alouds and stimulated recalls. In A. Phakiti, P. de Costa, L. Plonsky, & S. Starfield (Eds.), The Palgrave handbook of applied linguistics research methodology (p. 339-357). Palgrave Macmillan UK. https://link.springer.com/chapter/10.1057/978-1-137-59900-1_16

Briggs, N. (2018). Neural machine translation tools in the language learning classroom: Students' use, perceptions, and analyses. JALT CALL Journal, 14(1), 3-24. https://www.researchgate.net/publication/333030587_Neural_machine_translation_tools_in_the_language_learning_classroom_Students%27_use_perceptions_and_analyses

Case, M. (2015). Machine translation and the disruption of foreign language learning activities. ELearning Papers, 45, 4-16. https://www.diva-portal.org/smash/get/diva2:874792/FULLTEXT01.pdf

Caws, C., & Hamel, M.-J. (Eds.) (2016). Language-learner computer interactions: Theory, methodology and CALL applications. John Benjamins. https://www.jbe-platform.com/content/books/9789027266989

Chun, D. M. (2013). Contributions of tracking user behavior to SLA research. CALICO Journal, 30, 256-262. https://journals.equinoxpub.com/index.php/CALICO/article/viewFile/22903/18924

Clark, A. (2010). Supersizing the mind: embodiment, action, and cognitive extension. Oxford University Press.

Clifford, J., Merschel, L., & Munné, J. (2013). Surveying the landscape: What is the role of machine translation in language learning? @Tic. Revista D'Innovació Educativa, 10, 108-121. https://ojs.uv.es/index.php/attic/article/viewFile/2228/2184

Correa, M. (2011). Academic dishonesty in the second language classroom: instructors' perspectives. Modern Journal of Language Teaching Methods, 1(1), 65-80. https://www.academia.edu/1282480/Academic_Dishonesty_in_the_Second_Language_Classroom_Instructors_Perspectives

Correa, M. (2014). Leaving the "peer" out of peer-editing: Online translators as a pedagogical tool in the Spanish as a second language classroom. Latin American Journal of Content and Language Integrated Learning, 7(1), 1-20. http://laclil.unisabana.edu.co/index.php/LACLIL/article/download/3568/pdf

Council of Europe. (2020). Common European framework of reference for languages: learning, teaching, assessment. Council of Europe Publishing. https://www.coe.int/en/web/common-european-framework-reference-languages

Darvin, R. (2017). Language, Ideology, and Critical Digital Literacy. In S. Thorne & S. May (Eds.), Language education and technology (p. 17-30). Springer. https://www.researchgate.net/publication/314134973_Language_Ideology_and_Critical_Digital_Literacy

DeepL Translate (2017). DeepL GmbH. https://www.deepl.com/translator.html

Deifell, E. D. (2018). Dynamic intertextuality and emergent second language microdevelopment in digital space. [Doctoral dissertation, University of Iowa]. Proquest. https://ir.uiowa.edu/etd/6402/

Ducar, C., & Schocket, D. H. (2018). Machine translation and the L2 classroom: Pedagogical solutions for making peace with Google translate. Foreign Language Annals, (August), 779-795. https://onlinelibrary.wiley.com/doi/10.1111/flan.12366

Farrell, T. S. C., & Baecher, L. H. (2017). Reflecting on critical incidents in language education: 40 dilemmas for novice TESOL professionals. Bloomsbury. https://www.modernenglishteacher.com/reflecting-on-critical-incidents-in-language-education-bloomsbury-2017

Finch, A. (2010). Critical incidents and language learning: Sensitivity to initial conditions. System, 38(3), 422-431. https://www.sciencedirect.com/science/article/abs/pii/S0346251X10000813

Fischer, R. (2007). How do we know what students are actually doing? Monitoring students' behavior in CALL. Computer Assisted Language Learning, 20(5), 409-442. https://www.tandfonline.com/doi/abs/10.1080/09588220701746013

Flanagan, J. (1954). The critical incident technique. Psychological Bulletin, 51(4), 327-358. https://www.apa.org/pubs/databases/psycinfo/cit-article.pdf

Fuchs, C. (2019). Critical incidents and cultures-of-use in a Hong Kong-Germany telecollaboration. Language Learning and Technology, 23(3), 74-97. https://scholarspace.manoa.hawaii.edu/bitstream/10125/44697/1/23_3_10125-44697.pdf

García, O. (2009). Bilingual education in the 21st century: A global perspective. Wiley-Blackwell. https://www.wiley.com/en-gb/Bilingual+Education+in+the+21st+Century%3A+A+Global+Perspective-p-9781405119948

Garcia, I., & Pena, M. I. (2011). Machine translation-assisted language learning: Writing for beginners. Computer Assisted Language Learning, 24(5), 471-487. https://www.researchgate.net/publication/254216820_Machine_translation-assisted_language_learning_Writing_for_beginners

Gass, S. M., & Mackey, A. (2016). Stimulated recall methodology in applied linguistics and L2 research. Routledge. https://www.routledge.com/Stimulated-Recall-Methodology-in-Applied-Linguistics-and-L2-Research/Gass-Mackey/p/book/9780415743891

Google Translate (n.d.). [Google's free service that instantly translates words, phrases, and web pages between English and over 100 other languages]. https://translate.google.com/

Hamel, M.-J. (2012). Testing aspects of the usability of an online learner dictionary prototype: A product- and process-oriented study. Computer Assisted Language Learning, 25(4), 339-365. https://www.researchgate.net/publication/254217234_Testing_aspects_of_the_usability_of_an_online_learner_dictionary_prototype_A_product-_and_process-oriented_study

Hamel, M.-J. (2013). Analyse de l'activité de recherche d'apprenants de langue dans un prototype de dictionnaire en ligne. Apprentissage des langues et systèmes d'information et de communication (Alsic), 16. https://journals.openedition.org/alsic/2613

Hamel, M.-J., & Caws, C. (2010). Usability tests in call development: Pilot studies in the context of the dire autrement and francotoile projects. CALICO Journal, 27(3), 491-504. https://journals.equinoxpub.com/index.php/CALICO/article/view/23024

Hamel, M.-J., & Séror, J. (2016). Video screen capture to document and scaffold the L2 writing process. In C. Caws & M.-J. Hamel (Eds.), Language-learner computer interactions: Theory, methodology and CALL applications (p. 137-162). John Benjamins. https://benjamins.com/catalog/lsse.2.07ham

Hellmich, E. A. (2019). A critical look at the bigger picture: Macro-level discourses of language & technology in the US. CALICO Journal, 36(1), 39-58. https://journals.equinoxpub.com/CALICO/article/view/35022

Hellmich, E. A., & Vinall, K. (2021). FL instructors' beliefs about machine translation: Ecological insights to guide research and practice. International Journal of Computer-Assisted Language Learning and Teaching.

Hughes, H. E. (2007). Critical incident technique. In S. Lipu, K. Williamson, & A. Lloyd (Eds.), Exploring methods in information literacy research (p. 49-66). Chandos Publishing. https://www.researchgate.net/publication/237299629_Critical_incident_technique

Jiménez-Crespo, M. A. (2017). The role of translation technologies in Spanish language learning. Journal of Spanish Language Teaching, 4(2), 181-193. https://www.tandfonline.com/doi/abs/10.1080/23247797.2017.1408949

Jin, L., & Deifell, E. (2013). Foreign language learners' use and perception of online dictionaries: A survey study. MERLOT Journal of Online Learning and Teaching, 9(4), 515-533. https://jolt.merlot.org/vol9no4/jin_1213.pdf

Jolley, J. R., & Maimone, L. (2015). Free online machine translation: Use and perceptions by Spanish students and instructors. 2015 Central States Conference on the Teaching of Foreign Languages, 181-200. https://silo.tips/download/this-article-reports-the-results-of-a-survey-based-study-on-the-use-of-and

Jones, R. H. (2019). The text is reading you: Teaching language in the age of the algorithm. Linguistics and Education, 62, 100750. https://www.researchgate.net/publication/336446611_The_text_is_reading_you_teaching_language_in_the_age_of_the_algorithm

Joyce, E. (1997). Which words should be glossed in L2 reading materials? A study of first, second, and third semester French students' recall. Pennsylvania Language Forum, 58-64. https://files.eric.ed.gov/fulltext/ED427508.pdf

Kelleher, J. D. (2019). Deep learning. MIT Press. https://mitpress.mit.edu/books/deep-learning-1

Kern, R. (2015). Language, literacy, and technology. Cambridge University Press. https://www.researchgate.net/publication/269335607_Language_Literacy_and_Technology

Kramsch, C. (2002). Introduction: "How can we tell the dancer from the dance?" In C. Kramsch (Ed.), Language acquisition and language socialization: Ecological perspectives (p. 1-21). Continuum. http://www.veramenezes.com/Kramsch.pdf

Kramsch, C. (2014). Teaching foreign languages in an era of globalization: Introduction. The Modern Language Journal, 98(1), 296-311. https://www.jstor.org/stable/43651759

Kuniavsky, M. (2003). Observing the user experience: A practitioner's guide to user research. Morgan Kaufmann.

Larsen-Freeman, D. (2013). Chaos/complexity theory for second language acquisition. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics (p. 1-8). Blackwell. https://www.uibk.ac.at/anglistik/staff/freeman/course-documents/diane_chaos_paper.pdf

Larson-Guenette, J. (2013). "It's just reflex now": German language learners' use of online resources. Die Unterrichtspraxis/Teaching German, 46(1), 62-74. https://onlinelibrary.wiley.com/doi/10.1111/tger.10129

Le, Q. V., & Schuster, M. (2016, September 27). A neural network for machine translation, at production scale. Google AI Blog. https://ai.googleblog.com/2016/09/a-neural-network-for-machine.html

Lew, S., Yang, A. H., & Harklau, L. (2018). Qualitative methodology. In A. Phakiti, P. De Costa, L. Plonsky, & S. Starfield (Eds.), The Palgrave handbook of applied linguistics research methodology (p. 79-101). Palgrave Macmillan. https://link.springer.com/chapter/10.1057/978-1-137-59900-1_4

Lewis-Kraus, G. (2016, December 14). The great A.I. awakening. The New York Times. https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html

Lightbown, P. M., & Spada, N. (2011). How languages are learned (3rd ed.). Oxford University Press, USA.

Marsden, E., & Mackey, A. (2011). IRIS. [Digital repository]. https://www.iris-database.org/iris/app/home/index;jsessionid=A896FA75374B74342E7EDBDB89E9E2F3

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Sage Publications.

Mroz, A. P. (2014). Process research screen capture. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics (p. 1-7). John Wiley & Sons, Ltd. https://experts.illinois.edu/en/publications/process-research-screen-capture

Niño, A. (2008). Evaluating the use of machine translation post-editing in the foreign language class. Computer Assisted Language Learning, 21(1), 29-49. https://eric.ed.gov/?id=EJ786006

Niño, A. (2009). Machine translation in foreign language learning: Language learners' and tutors' perceptions of its advantages and disadvantages. ReCALL, 21(2), 241-258. https://eric.ed.gov/?id=EJ841812

Oishi, E. (2017). Critical incident technique. theDesignExchange [Open-source innovation archive of design methods and case studies]. https://www.thedesignexchange.org/design_methods/160

O'Neill, E. M. (2019). Online translator, dictionary, and search engine use among L2 students. CALL-EJ, 20(1), 154-177. https://www.academia.edu/38360214/Online_Translator_Dictionary_and_Search_Engine_Use_Among_L2_Students

Park, K., & Kinginger, C. (2010). Writing/thinking in real time: Digital video and corpus query analysis. Language Learning and Technology, 14(3), 31-50. https://scholarspace.manoa.hawaii.edu/bitstream/10125/44225/1/14_03_parkkinginger.pdf

Patton, M. Q. (1990). Qualitative interviewing. In Qualitative evaluation and research methods (2nd ed., p. 277-368). Sage Publications.

Pennycook, A. (2016). Posthumanist applied linguistics. Applied Linguistics, 39(4), 445-461. https://academic.oup.com/applij/article-abstract/39/4/445/2544439

Poibeau, T. (2017). Machine translation. MIT Press. https://mitpress.mit.edu/books/machine-translation-1

Qun, L., & Xiaojun, Z. (2015). Machine translation: General. In C. Sin-wai (Ed.), The Routledge encyclopedia of translation technology (p. 105-119). Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9781315749129-16/machine-translation-general-liu-qun-zhang-xiaojun

Reverso Translate (n.d.). https://www.reverso.net/

Screencastify (n.d.). [Screen Video Recorder]. https://www.screencastify.com/

SpanishDict (2016). Curiosity Media. [Spanish learning for everyone]. https://www.spanishdict.com/

Spradley, J. P. (1979). The ethnographic interview. Wadsworth.

Stapleton, P., & Kin, B. L. K. (2019). Assessing the accuracy and teachers' impressions of Google Translate: A study of primary L2 writers in Hong Kong. English for Specific Purposes, 56, 18-34. https://www.academia.edu/39966678/Assessing_the_accuracy_and_teachers_impressions_of_Google_Translate_A_study_of_primary_L2_writers_in_Hong_Kong

Tight, D. G. (2017). Tool usage and effectiveness among L2 Spanish computer writers. Estudios de Lingüística Inglesa Aplicada, 17, 157-182.

Tripp, D. (2011). Critical incidents in teaching: Developing professional judgement. Routledge. https://www.routledge.com/Critical-Incidents-in-Teaching-Classic-Edition-Developing-professional/Tripp/p/book/9780415686273

White, K. D., & Heidrich, E. (2013). Our policies, their text: German language students' strategies with and beliefs about web-based machine translation. Die Unterrichtspraxis/Teaching German, 46(2), 230-250. https://www.academia.edu/5161825/Our_Policies_their_Text_German_Language_Students_Strategies_with_and_Beliefs_about_Web-Based_Machine_Translation

Word Reference (1999). [Online Dictionaries]. https://www.wordreference.com/

Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., & Dean, J. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv.org. https://arxiv.org/abs/1609.08144

Yandex (1997). [Intelligent products and services powered by machine learning]. https://yandex.com/

Zhang, L. J., & Zhang, D. (2020). Think-aloud protocols. In J. McKinley & H. Rose (Eds.), The Routledge handbook of research methods in applied linguistics (p. 302-311). Routledge. https://www.routledgehandbooks.com/doi/10.4324/9780367824471-26

Appendix

Instructions: Write a mini essay (around 100 words or 10 sentences) in which you describe your favorite city in French. Please consider the following questions:

What is the name of the city?

Why is it your favorite?

What are the best places to visit in the city?

What can you do at these places?

Why might someone visit this city?

Did you ever visit the city?

Notes

1 See Marsden & Mackey, 2011.

List of illustrations

Title: Figure 1 – Screenshots from Masha's task completion.
URL: http://journals.openedition.org/alsic/docannexe/image/5705/img-1.jpg
File: image/jpeg, 207k
Title: Figure 2 – Screenshots from Jamie's task completion.
URL: http://journals.openedition.org/alsic/docannexe/image/5705/img-2.jpg
File: image/jpeg, 227k
Title: Figure 3 – Additional screenshots from Jamie's task completion.
URL: http://journals.openedition.org/alsic/docannexe/image/5705/img-3.jpg
File: image/jpeg, 171k
Title: Figure 4 – Screenshots of Sasha's task completion.
URL: http://journals.openedition.org/alsic/docannexe/image/5705/img-4.jpg
File: image/jpeg, 315k
Title: Figure 5 – (Meta)linguistic information produced by one MT platform.
URL: http://journals.openedition.org/alsic/docannexe/image/5705/img-5.jpg
File: image/jpeg, 222k

To cite this article

Electronic reference

Emily A. Hellmich, "Machine Translation in Foreign Language Writing: Student Use to Guide Pedagogical Practice", Alsic [Online], Vol. 24, n° 1 | 2021, published online 12 August 2021, accessed 19 April 2024. URL: http://journals.openedition.org/alsic/5705; DOI: https://doi.org/10.4000/alsic.5705

Author

Emily A. Hellmich

Emily A. Hellmich (PhD, University of California, Berkeley) is an Assistant Professor of French & Second Language Acquisition/Teaching at the University of Arizona. Her work focuses on the impact of our global, digital world on language education, and she has published in the fields of applied linguistics, CALL, and education.
Affiliation: University of Arizona, Tucson, AZ USA.
E-mail: hellmich@arizona.edu
Web: https://french.arizona.edu/people/hellmich
Address: Modern Languages 572, 1423 E University Boulevard, Tucson, AZ 85721, USA.

Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. All other elements (illustrations, imported supplementary files) are "All rights reserved" unless otherwise stated.
