We would like to thank Dorothy Kenny, who first encouraged us to research this topic; Roda P. Roberts, for her invaluable insight into the definitions of context and the history of lexicography, as well as for her advice on the methodology of the analysis; Anne Przewozny, Franck Sajous and François Maniez, for their incisive comments on the first drafts of the paper; and the reviewers, for their helpful suggestions on how to enhance both the presentation and the contents of the study.
In a recent article (2023), Bartosz Ptasznik claims that “there is no doubt that example sentences enhance a dictionary’s value by providing dictionary users with the typical context of words” [2023: 29]. Starting from the observation that dictionary entries of current English monolingual online learners’ dictionaries now contain a “plethora of examples” due to lexicographers “extracting detailed information from language corpora” [2023: 33], he wonders how many examples users really need and how many would lead to confusion. Using monolingual learners’ dictionaries and a translation task, he undertakes a study to investigate the usefulness to language learners of examples in language production and the number of examples needed for this purpose. He concludes that examples are beneficial to that target group in encoding tasks and that a large number of examples (up to eight) does not lead to confusion.
The user group under study in this paper is seasoned or trainee translators, who cannot exactly be considered language learners. Although they are not targeted as the primary user group of such dictionaries, surveys show that translators do resort to them, in conjunction with a wide range of other tools now at their disposal (online term banks, CAT tools, Machine Translation, etc.). This is because no monolingual dictionaries are aimed exclusively at translators, and because this category of dictionaries provides richer context material and far more examples than most other types (Atkins & Rundell [2008: 222, 225], Dominguez Vázquez & Gouws [2023: 8]), information that is essential to translators both for understanding a source text and for producing a target text.
Stemming from the oft-expressed desire on the part of translators as a distinct user group for more and better examples in dictionaries (e.g., Roberts [1994]), the aim of our study is to present a systematic review of the “additional example” feature in several online monolingual English learners’ dictionaries with the translator as the main user in mind. Both from a quantitative and a qualitative point of view, we aim to assess the usefulness and usability of the “additional example” feature for translators (trainees and/or seasoned) by critically reviewing the “additional examples” found for several words in various dictionaries.
The present article is divided into four sections. Section 1. provides the general background of the study. It first discusses the theoretical notion of “context”. Then it describes the contextual needs of translators, in particular as evidenced by a number of surveys, and it lists the resources that they can turn to in order to glean contextual information. The next two subsections focus on those resources: first on dictionaries (their use by translators and the way contextual information is encoded in them through examples, and especially additional examples), and then on other technologized resources (online term banks, corpora, data-driven dictionaries, CAT tools and MT) with which dictionaries are now inextricably bound up. Section 2. outlines the methodology of the study, consisting in a literature-based justification of the focus of the study followed by a presentation of (i) the five online monolingual English learners’ dictionaries under study, (ii) the ten words under analysis and (iii) the variables to be analyzed. The analysis proper is found in Section 3. and explores the issues of general presentation, number, format, mode of selection and relevance to translators of additional examples based on the 1,306 extra examples provided in the five dictionaries under study. Lastly, Section 4. consists in a discussion of the results, which first puts forward a number of suggestions for improvement of the “additional example” feature before suggesting a number of avenues that could be investigated to refine the analysis and to better meet the needs of translators regarding contextual information in the age of e-lexicography.
This section first discusses the notion of “context” from a theoretical point of view (1.1.) (definition, difference between context and example, types of contextual information, types of contexts/examples); then the attention shifts to the contextual needs of translators: the next subsection (1.2.) identifies those needs based on a number of parameters and a number of existing surveys, as well as the resources that make it possible to meet them. The way in which, as assessed by surveys, dictionaries – especially monolingual ones – are used by translators, and the way they provide contextual information through (additional) examples is discussed next (1.3.). Finally, Section 1.4. contains a brief analysis of the way contextual information is provided in resources other than the monolingual general-purpose dictionary which are used by translators during the translation process.
Context is a complex, multidimensional concept that can refer to various realities and can be approached in different ways based on the discipline (Dominguez Vázquez & Gouws [2023: 2]). It is particularly central to disciplines concerned with language use, such as pragmatics (House [2006: 340]), lexicography, and translation studies (House [2006: 338]). But even within the field of linguistics, it is not understood in the same way by all scholars. For example, Frédéric François in Martinet [1969: 65-66] defines it as follows:
The context of a unit of a certain nature will then be defined as the set of units of the same nature located in its vicinity which, by their presence, condition the presence, form or function of the unit under consideration.
- 1 Faber et al. [2016: 492] explain that context can range from “a few words on either side of a term” (...)
François represents those linguists who see context as purely linguistic. Others, like the British linguist John Firth [1957], use the word “context” in a more global sense to cover both the linguistic environment and the extra-linguistic environment. Michael Halliday [1978] further confuses the issue by using the term context of situation to refer to the extra-linguistic environment. Still others, such as Vinay & Darbelnet [1958: 161-163], make a distinction between these two types of environment, calling the linguistic environment “context” and the extra-linguistic environment “situation”. We follow the tradition established by the latter and use the term “context” to refer to the linguistic environment. However, while most linguists limit the scope of context to the sentence level, we accept the fact that it can extend beyond the sentence1. In other words, our context resembles Bothma and Gouws’ co-text, which they define as “the textual environment of an item” [2022: 5]. This allows us to cover not only the example category found in general dictionaries but also the longer contexts found in specialized dictionaries and term banks.
- 2 It would be interesting to see whether new dictionaries prepared to be disseminated online contain (...)
Confusion about the meaning of “context” is compounded by the parallel use of the term “example”. Are the two synonymous? In practical terms they can be considered as such. They each consist of a grouping of linguistic elements that shed light on one of those elements. While “example” is used to designate such groupings in a dictionary and “context” is used to denote this reality in term banks, they are similar in both form and function. Contexts in term banks, though, tend to be longer than examples in dictionaries, no doubt because the former were published online and were therefore not subject to the space limitations imposed on printed dictionaries2. Whatever their length, both examples and contexts provide contextual information.
Contextual information can be of many different kinds. We present three main types below, using the word explosion as an illustration3:
(i) Semantic information, i.e., information about the meaning of a word
e.g., Three people have been killed in a bomb explosion in northwest Spain.
e.g., It was an explosion of anger against the practices of the occupying forces.
Killed and bomb illustrate one common sense of explosion, that of a violent burst of energy caused by a device such as a bomb, while anger in the second example shows how explosion can be followed by an abstract word to express a sudden violent expression of someone’s feelings.
(ii) Collocational information, i.e., the words with which a linguistic element is commonly used
e.g., They warned him that a referendum might cause an explosion in the country.
e.g., ...the explosion of protest and violence sparked off by the killing of seven workers.
Cause an explosion and spark off an explosion are collocations.
(iii) Syntactic information, i.e., how a linguistic element combines with others
e.g., The spread of the suburbs has triggered a population explosion among America’s deer.
This example illustrates the syntactic pattern in which explosion is used: it is the direct object of the verb triggered and is followed by the preposition among.
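Incidentally, this is the kind of syntactic information that corpus tools can now extract automatically. As a purely illustrative sketch (the model, sentence and output are ours, not part of any dictionary discussed here), the following Python fragment uses the spaCy library to recover the pattern just described:

```python
import spacy

# Assumes the small English model has been installed beforehand:
#   pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The spread of the suburbs has triggered a population "
          "explosion among America's deer.")

for token in doc:
    if token.text == "explosion":
        # Grammatical function of the noun and the verb governing it
        print(token.text, "is", token.dep_, "of", token.head.text)
        # Prepositional modifiers the parser attaches to the noun, if any
        print([child.text for child in token.children if child.dep_ == "prep"])
```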
In the literature, both contexts and examples have been classified according to type depending on a) the purpose of the context/example, b) the type of contextual information presented and c) the amount of information provided. We will look briefly at three classifications: (i) Ptasznik’s for examples [2023], and those of (ii) Dubuc [1997] and (iii) Auger & Rousseau [1978] for contexts.
Ptasznik [2023: 29] divides examples according to their purpose into decoding examples and encoding examples: the former, most necessary for lower-frequency items, would contain more semantic information, whereas the latter would contain collocational or syntactic information.
Both Dubuc and Auger & Rousseau have context categories that correspond to Ptasznik’s decoding example. Thus, Dubuc [1997: 72-73] proposes the following classification: defining, explanatory and associative “depending on the quality and quantity of semantic features”. A defining context contains the term and sufficient semantic features to provide a clear idea of its meaning. It does not necessarily constitute a formal definition. An explanatory context describes some characteristics of a concept without necessarily defining it (e.g., for the term shed, a context that explains how a shed is formed, without actually defining it). Finally, an associative context does not contain any semantic features; it simply enables the terminologist to link a term to the subject field through its association with the terms around it.
Dubuc’s first two classes of context above serve a decoding purpose. The third could provide some linguistic information, or could simply attest to the existence of the term.
The Auger & Rousseau classification [1978: 34-36] also contains three categories: defining/descriptive, metalinguistic and attesting. The defining or descriptive context presents the semantic features of the term and shows the term in the context of other related terms. A metalinguistic context is a linguistic comment on the term and its usage. An attesting context provides little information other than attesting to the existence of the term. In this classification, the defining context corresponds to a decoding example. However, there is little support for encoding provided by any type of context enumerated.
- 4 See also Josselin-Leray et al. [2014: 633] and Picton et al. [2018: 112] on that topic.
- 5 For a more detailed description of the various tasks involved in the translation process, see for e (...)
- 6 Based on a broader definition of context such as the ones mentioned in Section 1.1.1., Bowker [2011 (...)
- 7 Our emphasis.
The translation problems that translators face are either source text-related (for comprehension of the source text) or target text-related (for transfer into the target text)5. All three types of contextual information as defined above (see Section 1.1.3.), i.e., semantic, collocational and syntactic information, are particularly useful to help them solve those problems. Contexts that contain semantic information are useful in the source language, to help the translator understand the text he/she is translating, but also in the target language, to make sure the meaning has been accurately conveyed through translation. Regarding transfer into the target text, Roberts [1994: 55-56] states that what translators need is help on how to select the most appropriate equivalent for their needs, that is to say information on the context in which different equivalents can be used. She adds [1994: 56] that translators also need help, once they have picked the right equivalent, to “integrate it properly into a sentence”. That kind of help can be provided by contexts which contain syntactic information. Finally, with idiomaticity in mind, Bowker [2011: 214] insists that what translators “really want [is] usage information – such as contexts or collocations – [i.e., collocational information] which will help them to produce a target text that reads well”6. More generally, Varantola [1994: 607] underlines that “the main problem for the translators, in addition to finding answers to his/her queries, is how to be sure that the decision they make, on the basis on the information provided [by the various sources consulted], is correct and acceptable for the context7”. As shown in Belleflamme [2019], examples of usage in dictionaries provide an excellent means of helping the translator in this respect.
The need for a given type of contextual information may of course vary based on the translator’s profile (level of training (e.g., trainee vs. seasoned), degree of familiarity with the domain or genre, level of proficiency in the languages involved, etc.), on the type of text translated, and on the directionality of the translation (L1 into L2, L2 into L1, L3 into L2, etc.).
This subjective identification of translator needs has been confirmed by various surveys on several types of dictionaries (monolingual, bilingual, general-purpose, specialized…). A survey of this target group by the Bilingual Canadian Dictionary Project back in the 1990s found that at least a quarter of the subjects surveyed felt that improvement was required both in the number and in the variety of examples presented in bilingual dictionaries (Roberts [1994: 56-57]). One of our previous studies (Josselin-Leray [2005]) reached a similar conclusion: although users were overall satisfied with the examples provided by their dictionaries (between 41.3% and 67.5% of users said they were satisfied), the level of satisfaction was lower among the “language professionals” user group, which includes translators. In a more recent survey carried out by Durán Muñoz in 2010, whose findings were summed up in Durán Muñoz [2012] and which dealt mostly with specialized lexicographical resources, translation professionals (translators, terminologists, project managers, subtitlers) gave their opinions regarding their needs; the second most repeated argument was that terminological resources should “include more pragmatic information about usage and tricky translations”, and the fifth one was that they should “provide examples taken from real texts”. An even more recent survey by León Araúz et al. [2019: 136] among 39 trainee translators who were asked about their expectations regarding terminological resources for translation revealed that context and usage examples, phraseology and collocations, and access to corpora were the most relevant data categories. Finally, we can quote a survey regarding user satisfaction with the European multilingual term bank IATE which was carried out in 2022 among external users and involved 943 respondents (80% of whom were translators). The results, which have just been released8, show that the very first proposal mentioned by the respondents to improve the usefulness of results was “seeing terms in context and with their definition (within IATE or with a link to other EU resources like EUR-Lex)”, EUR-Lex being a parallel corpus which is a collection of multilingual corpora in all the official languages of the European Union.
All these findings justify the need for a study of contextual information as it is presented in dictionaries (and other tools) used by translators.
In a very recent paper, Dominguez Vázquez & Gouws [2023: 8-10] indicate that to obtain “at least some context information from a monolingual and/or bilingual perspective”, “a typical user (i.e., without (meta)lexicographic knowledge) can handle resources of different types”. They list the following resources: (i) a variety of (electronic) dictionaries, (ii) corpus-analysis tools, (iii) machine translation engines and tools, (iv) reading and writing assistants, (v) tools and programs that automatically generate various text types, literary sentences, texts from images and vice versa, and, in some cases, jokes, poems or stories. When it comes to a specific type of user, the translator, who does have some (meta)lexicographic knowledge and a rather advanced level of proficiency in both source and target language, this list has to be adapted. Categories (iv) and (v) are not relevant for our study as they refer to types of resources that are hardly – if ever – used by translators. Dictionaries, however, – and, among them, monolingual dictionaries – are essential tools found in the trainee or the professional translator’s toolbox, along with additional resources that are now an essential part of the translator’s increasingly technologized environment, such as online term banks, data-driven dictionaries (such as Linguee) and Computer-Assisted Translation (CAT) tools (also known as Translation Memories or TMs), and, to some extent, Machine Translation (MT) tools. We will first focus on (monolingual) dictionaries and the way contextual information is conveyed through examples, and particularly “additional examples” (Section 1.3.), but will also present other resources that translators can query for contextual information in Section 1.4. since, as will be shown in that section, the boundary between dictionaries and those other resources is now becoming blurred.
- 9 As noted by Künzli [2001: 520], this statement may be qualified, though, by the fact that the trans (...)
- 10 We may also add that translators also (often?) use a bilingual dictionary and a monolingual one whi (...)
Dictionaries traditionally provide only or mostly context-free examples, i.e., examples that are perceived as prototypical and frequent, while what translators need to find the suitable equivalent(s) is typically context-bound (Atkins [2007], Varantola [1994: 607]). Despite this, dictionaries have always been one of the main resources that translators turn to in order to get contextual information, a trend that has undoubtedly been favored by the shift from context-free to context-sensitive dictionaries (Rundell [2018: 6], Varantola [2006: 217]). When studying dictionary use in translation, Varantola [2002: 34] established that one of the few regular patterns of behavior that emerged was that “dictionary users resort to dictionaries to solve a context-dependent problem”. One of the main findings of Künzli’s well-known empirical study conducted in 2001 about the use of information sources during the translation process, which involved three trainee translators and three professional translators, was that the bilingual dictionary was the tool most commonly used by the participants [2001: 513] and that the monolingual dictionary ranked second. Englund Dimitrova & Jonasson [1997], quoted in Künzli [2001: 519], also showed that the monolingual dictionary was very much used by professional translators9. A few years later, in 2007, thanks to an empirical study examining the information behavior of 19 professional translators, Domas White, Matteson & Abels [2007: 587] showed that the translators’ primary sources were still dictionaries, with the participants relying “heavily on both monolingual and bilingual dictionaries in their work”. Finally, Buendía-Castro’s very recent study of dictionary use by trainee translators at the University of Granada, Spain, found that 63.2% of the students surveyed used dictionaries very often. The results varied, though, depending on the academic year they were in, with the frequency of use increasing in line with their academic level (Buendía-Castro [2023: 5]), and on the type of dictionary: the most common type of dictionary used by trainee translators was “the electronic bilingual general dictionary (68.7%), followed by the electronic monolingual general dictionary in the students’ L2 (62.2%), and the electronic monolingual general dictionary in the students’ L1 (38.3%)” (Buendía-Castro [2023: 15])10.
- 11 In 2019 and 2020, we carried out a survey that included 31 questions on the use of monolingual and (...)
- 12 Examples of such dictionaries are the following. For French dictionaries: Dictionnaire Le Petit Rob (...)
- 13 This phenomenon has been little studied so far. What has been studied in further depth is the behav (...)
It cannot be denied that the patterns of use of dictionaries by trainee and professional translators have changed over the last fifteen years. Needless to say, most dictionaries consulted by translators now are online dictionaries (Durán Muñoz [2012: 81]), even though a few paper dictionaries, e.g., specialized dictionaries, are still being used. As shown in Josselin-Leray [2021], an emerging trend among trainee translators is that fewer and fewer of them own paper dictionaries, if any (which is in keeping with Buendía-Castro’s findings [2023: 5, 15]), and that the online dictionaries they tend to favor are almost exclusively free dictionaries11, even though some more appropriate advanced dictionaries are available but require a subscription fee12. Another trend among trainee translators that we highlighted in Josselin-Leray & Rossi [2022] is that the boundary between various tools (i.e., dictionaries, specific dictionaries such as Linguee, CAT tools, Machine Translation) now seems to be blurred13, which is in keeping with what L’Homme & Cormier [2014: 335] had already noticed about “the traditional dividing line between dictionaries and other kinds of resources [being] more and more difficult to draw”. This finding accounts for the fact that we deemed it necessary to include resources other than dictionaries in our analysis (Section 1.4.). However, in our view, the fact that the various translation resources and platforms now found online are “dramatically changing the way that information is obtained” (Buendía-Castro [2023: 1]) is not a good enough reason to underestimate the role that dictionaries still play. We strongly believe, just like Buendía-Castro [2023: 1], that “[m]ore than ever, the role of dictionaries should be highlighted”.
Having stated the rationale for our study, we now focus on the way contextual information is encoded in dictionaries through examples, and in particular additional examples.
- 14 A broader perspective could also include other types of Multi Word Expressions such as idioms and p (...)
- 15 If we give a broad definition to context, other dictionary components can be considered as conveyin (...)
- 16 For further reading on collocations and other types of phraseological units, see for instance Grang (...)
Contextual information can be provided in dictionaries through many different components, some of which are specific to a given type of dictionary. For monolingual dictionaries, the three main types of components that correspond to our definition of context as presented in Section 1.1.1. are (i) sense indicators or meaning discriminators (Atkins & Rundell [2008: 214-216]), (ii) collocations14, and, what interests us most, (iii) examples15. In the most recent learners’ dictionaries, collocations have been given particular attention and salience. They are usually distinguished typographically (by color or bold print) or organizationally (by being grouped into boxes) (Dziemianko [2014])16. However, collocational information may also be present in another component, examples. Examples in dictionaries are indeed one of the most straightforward ways to introduce contextual information. According to Atkins & Rundell [2008: 453-455], three main functions can be assigned to examples in dictionaries, the last two being the most relevant for conveying contextual information:
(i) “Attestation”: a quotation is used to prove the ‘bare existence of words’, as Johnson put it. This reminds us, of course, of the attesting context that we mentioned in Section 1.1.4.
(ii) “Elucidating meaning”: examples illustrate usage, and can act as a “helpful complement to the definition”, by clarifying meaning distinctions in a polysemous word. This function may encompass the defining and explanatory contexts as defined by Dubuc (see Section 1.1.4.) and shows how semantic information can be conveyed through examples.
(iii) “Illustrating contextual features: syntax, collocation, register etc.”. Atkins & Rundell [2008: 454] insist that “examples have an important role in illustrating the word’s contextual range”, something which is particularly important in dictionaries for learners. Examples provide syntactic information as they can “back up any statement about a word’s syntactic behaviour with an example that instantiates the pattern” [2008: 454]. They also provide collocational information as they are supposed to “show the word in one of its frequent collocational pairings”. Monolingual learners’ dictionaries, in particular, “use examples to provide a full account of a headword’s collocational behavior”. Finally, as they show a word in its natural setting, examples can show if it is “marked for style, register or regional distribution”.
Another crucial function of examples is that they “provide models for language production” (Rundell [2015: 318]).
Within the microstructure of monolingual dictionary entries, examples are traditionally nested at sense level right after a definition (Ptasznik [2023: 33]), and their number is usually limited, with an average of two to three.
In addition to those “traditionally” presented examples, most monolingual dictionaries online now provide “additional examples” which, as will be thoroughly analyzed in Section 3., come under various names (“extra examples”, “more examples”, etc.), may be presented in different ways – e.g., they can be found within the dictionary entry or in an extra section –, may be automatically or optionally displayed, may be taken from specific corpora or from online sources, may have been edited or not, etc.
- 17 A well-known exception in English lexicography is Johnson’s Dictionary (1755), which was the first (...)
- 18 A famous example that was particularly criticized for being unnatural and over-contextualized was t (...)
Why and when did monolingual online dictionaries introduce this new feature? For a very long time, examples were simply invented by lexicographers17. As noted by Potter [1998: 357], before the first edition of the COBUILD was published in 1987, the examples found in (learners’) dictionaries were almost exclusively made up, “with the exception of occasional citations from magazines and newspapers”. With the advent of corpus linguistics, and consequently long before dictionaries went online, lexicographers started to follow COBUILD’s innovation of “using all real examples taken directly from a corpus as the basis, though not necessarily as the source, for their examples”, which greatly improved the usefulness and the naturalness of the examples – pre-corpus examples were often awkwardly over-contextualized18 or too decontextualized (Atkins & Rundell [2008: 457]). The older model of invented, often truncated examples thus gave way to “the use of authentic examples in the form of complete sentences taken from a corpus” (Rundell [2015: 318]). In the late 1990s, apart from a limited number of scholars (e.g., Laufer [1992], Nesi [1996]), it was thus generally agreed that examples taken directly from a corpus with little or no modification were as useful to users as, or more useful than, fully invented examples.
Going one step further, as stated by De Schryver [2003: 167] and Pastor & Alcina [2010: 313], a number of scholars, starting in the 1990s, “expressed the wish to offer users of E[lectronic] D[ictionarie]s the same wealth of information lexicographers find in corpora, in other words, to include a corpus cum query tools as integral parts of an ED”. From then on, the dividing line between traditional dictionaries, corpora and other tools grew fuzzier and fuzzier. After showing in her 1994 study how a “well-balanced and versatile” corpus would be a welcome help in the translators’ decision-making process, “in addition to19 traditional dictionaries and dictionary information”, Varantola went further still in 2002 and suggested that:
A passive dictionary could also be “activated” by giving the user the possibility to access relevant corpus data – e.g., concordance lines from target-language corpora for the potential translation equivalents given in the entry. [2002: 37]
Cobb [2003] also advocated “a blend of dictionary and concordance” for language learners. About a decade later, when critically reviewing the terminological tools at the translators’ disposal, whose shortcomings regarding contextual information she highlighted, Bowker [2011: 215] made a similar suggestion. She showed how useful it could be for translators to generate concordances or frequency lists ‘on the fly’ [2011: 227-228]. In fact, a limited number of dictionaries, particularly learners’ dictionaries, started offering that option, which was one of the most innovative possibilities offered by the CD-ROM format (Nesi [1999: 62]). We will have a quick look at two of them, as the way they were devised and implemented has undoubtedly influenced the way the additional example feature is currently implemented in online dictionaries: (i) COBUILD and (ii) LDOCE.
- 20 The Word Bank in Collins COBUILD on CD-ROM is not to be confused with the Bank of English, a much l (...)
- 21 This was the case for the prototype, but also for the final version, as explained by Varantola [199 (...)
- 22 In his review, Rosszell [2003] noted that the following number of additional corpus examples were p (...)
(i) COBUILD
A pioneering attempt was the second edition of COBUILD (1995, CD-ROM), which contained a 5-million-word sample from the Word Bank Corpus20 (Lew [2010], Varantola [1994], Nesi [1999]). However, the 1995 Word Bank did not contain a proper concordancer (De Schryver [2003: 169])21, but only an Advanced Search feature, and a number of problems were listed by Nesi [1999: 62]. Access to the corpus actually took the form of access to numerous examples. For instance, for the headword vote (verb), the CD-ROM gave access to 373 examples22. Those examples were always in full-sentence form (as opposed to KWIC concordances). According to Lew [2010: 295], because of “the rather poor integration between the main text of the dictionary and the corpus sample, or perhaps owing to the disappointing size of the corpus itself”, this innovation was not very well received, and did not trigger “a boom of corpora integrated in E[lectronic] D[ictionaries] from then onwards” as might have been expected (De Schryver [2003: 169]). In a later edition of the COBUILD, access to the corpus improved; the Word Bank feature, this time, according to Paquot [2012: 166], functioned as a concordancer: users were able to search the corpus for single words as well as phrases, as shown in Paquot [2012: 167]. In the 2009 edition of the COBUILD, the Word Bank tool disappeared altogether (Paquot [2012: 167]).
(ii) LDOCE
According to Asmussen [2014: 1082], it was the 4th edition of LDOCE (2003) that first introduced the “Examples Bank” feature in the dictionary. The users could find additional examples of a “certain use” of a headword, which could be displayed both as KWIC concordances and as a listing of plain examples. A striking fact, though, is that the examples were not randomly sampled from the corpus, but “were chosen by applying ‘linguistic filters’” (which were not described). In that case the corpus was only used as “a repository of additional examples of how to use words and expressions”, but was not meant to be a stand-alone tool. Paquot [2012: 168-169] examined the next edition (LDOCE 5, 2009) and noticed that access to corpus data was offered in the microstructure of the dictionary. An example button on the right of the main entry allowed the users to access other dictionary examples, and also examples from the corpus (a sample of one million sentences taken from the Longman Corpus Network). The corpus was POS-tagged but, unlike in COBUILD 5, there was no concordancing facility, which meant the corpus could not be used as a stand-alone tool.
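The difference between the two display modes mentioned above (full-sentence listings, as in COBUILD, and KWIC concordances, as in LDOCE) can be illustrated with a minimal sketch. The following Python fragment is a toy KWIC generator of our own devising, not the code of either dictionary:

```python
import re

def kwic(sentences, keyword, width=30):
    """Print corpus hits in Key Word In Context format: the keyword
    is aligned in a central column, flanked by its immediate co-text."""
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    for sentence in sentences:
        for match in pattern.finditer(sentence):
            left = sentence[:match.start()][-width:]
            right = sentence[match.end():][:width]
            print(f"{left:>{width}} [{match.group()}] {right:<{width}}")

# Toy corpus; a dictionary's Examples Bank would draw on millions of sentences.
corpus = [
    "Three people have been killed in a bomb explosion in northwest Spain.",
    "They warned him that a referendum might cause an explosion in the country.",
]
kwic(corpus, "explosion")
```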
- 23 “Ideally, users should be granted access to additional corpus examples from all the relevant points (...)
- 24 The Digitale Wörterbuch der Deutschen Sprache allowed users to see concordances from several corpor (...)
As underlined by Rundell [2015: 318], from the 1990s the provision of additional examples became a common feature of learners’ dictionaries published on optical disks. Even though the examples would (mostly) be taken from a corpus, he noticed two main shortcomings: (i) in most cases, there was “little or no filtering (for quality, appropriacy, etc.)” and (ii), worse, examples for polysemous words were not mapped to specific senses. When dictionaries went online, the space constraints disappeared, and publishers experimented with new ways of providing access to concordances taken from corpora and to larger numbers of examples, which answered the wish expressed by Lew [2010: 295]23. This was the case for some German, Dutch and Danish general-purpose monolingual dictionaries24, and also for some specialized resources (see Section 1.4.1.), but especially for English learners’ dictionaries. Rundell [2015: 319] mentions the case of the Oxford Dictionaries online, in which the user, in 2015, found “one or two examples at most words or senses” but had the option of clicking on a “More example sentences” link that brought up, “typically”, three further corpus-derived examples. A significant improvement over the above-mentioned CD-ROM examples of COBUILD and LDOCE is that the extra examples were mapped to the meaning that they instantiate.
In 2023, the additional example feature, which started as a pioneering attempt by the COBUILD2, can be considered a “strong trend [in] English monolingual learners’ dictionaries which have been made available to us online” [Ptasznik 2023: 30], which justifies our study. However, dictionaries cannot be regarded as “stand-alone tools that should provide all the answers that the users need about language in use” (Varantola [2002: 34]), especially in the case of translators who rely on a combination of tools which tend to merge more and more, which is why we deem it necessary to devote a whole subsection to them.
- 25 Wang & Lim [2017: 63] also add “social networking and cloud technology” to the list of “web-based a (...)
- 26 https://www.juremy.com/about/, accessed on October 20, 2023.
Conventional general-purpose monolingual or bilingual dictionaries are used by translators in conjunction with several other tools: (i) specialized dictionaries and term banks, (ii) corpora and corpus-analysis tools, (iii) data-driven dictionaries (e.g., Linguee), (iv) CAT tools and (v) Machine Translation (MT) tools25. For the purpose of the analysis, we present these tools separately, but the boundary between them is now increasingly blurred: Linguee is, in a way, half-way between a dictionary and a corpus, and is now found on the same website as an MT tool, DeepL; specialized glossaries are part of CAT tools and can now be integrated into MT engines; MT engines are now found within all popular CAT tools (e.g., Language Weaver for RWS). We can also mention here a tool which embodies that tendency: Juremy.com26, which used to be restricted exclusively to EU translators specialized in legal translation and has now been made available to a wider audience. It is an online concordance search tool, available in all combinations of the 24 EU languages, which retrieves contextual information (term and phrase matches) both from a corpus (EUR-Lex) and from the European Union multilingual term bank IATE. We will start our overview of those tools with term banks.
In this paper, we have deliberately decided not to focus on specialized dictionaries and term banks. This choice may sound debatable: it could be argued that, as most professional translation takes place in specialized fields of knowledge (e.g., technical translation), translators make use of specialized terminographic resources such as term banks much more often than of general-purpose dictionaries. However, three main reasons account for our decision. First, various surveys, as we show in Section 1.3.1., have confirmed that translators still rely heavily on dictionaries, and Bowker [2012: 382-384] has convincingly shown that what they actually use is in fact a combination of general and specialized tools, a behavior which is possibly more characteristic of trainee translators. In the experiments led by León Araúz et al. [2019: 145] (a translation task carried out within a specific workstation), the participants, who were trainee translators, looked up specialized terms (estuary, storm-induced, detached breakwaters and soft cliffs) in general resources such as the Cambridge dictionary. Second, not all professional translators translate heavily specialized texts. The information subtitlers need, for instance, has to do with everyday language (plus slang and colloquialisms, which are more likely to be found in (large) general dictionaries). Others include, for example, videogame localizers and people who translate biographies or general non-fiction. Third, and most importantly, there is so much information worth investigating that the topic deserves a separate paper (Josselin-Leray, in preparation). As a consequence, we will only briefly mention a few prominent elements here.
There have been some attempts at so-called “contextual” specialized dictionaries – one can mention, in the field of earth sciences, the Dictionnaire contextuel de français pour les sciences de la Terre (Descamps [1973]), which was not targeted at translators, or the Dictionnaire Contextuel anglais-français de la chromatographie, a bilingual dictionary explicitly aimed at translators (Serré [1981]) – but these have remained isolated attempts. Traditionally, contextual information is encoded in the “context” field of term records in online term banks, but it often fails to meet the translator’s needs for two main reasons: (i) the number of contexts provided – if any – is very limited and very often comes down to one single context (Bowker [2011: 214], Francœur [2015]), (ii) the type of contextual information included and the way the contexts have been picked is often unclear (Josselin-Leray [2018]).
Two examples of specialized resources that provide a feature similar to that of “additional examples” in dictionaries through access to corpora are mentioned by Pastor & Alcina [2010: 316]. One is a computing science dictionary, the DicoInfo, compiled at the University of Montreal27. Each term record contains a contextual field in which the user can access concordances extracted from a corpus, with their references mentioned. For some terms, the contexts are annotated and provide information on semantic roles. However, there is no proper concordancing function. The second example is the EOHS Term online database, whose website (http://eohsterm.org) is now offline, which was compiled at the University of Bologna with translators in mind (Castagnoli [2008]). In that database, the context field was deleted from the entries, and the terms were linked to a reference corpus where the user could access the contexts in the form of KWIC concordances and copy and paste them. A third, more recent example is EcoLexicon28, developed at the University of Granada, which provides concordances from the EcoLexicon corpus through another tool, Sketch Engine. Because the corpus is tagged with metadata that contain information about the language of the text, the author, date of publication, target reader, contextual domain, keywords, etc., corpus queries can be based on pragmatic factors, and users can compare the use of the same term in different contexts, e.g., the use of sediment in Environmental engineering texts vs. its use in an Oceanography context, or the use of sand by experts vs. laymen (León Araúz et al. [2019: 77]). Finally, Delavigne [2023: 133-136] also has interesting suggestions on how to integrate corpora into specialized resources from a socioterminological point of view. But what about the use of corpora themselves?
- 29 Bernardini’s study makes a clear difference between the usefulness of corpora for translation schol (...)
Given the relevance of corpus evidence for analyzing meaning in context (Rundell [2018]) and for retrieving collocational and syntactic information, corpora could be considered one of the most useful tools for translators. A very large number of studies (e.g., Bowker [1998], Josselin-Leray [2005: 524-529], Kübler & Aston [2010], Loock [2016], Bernardini [2022]) have shown in which ways and at which stages of the translation process various types of corpora can be helpful29. However, the fact remains that corpora are still underused by translators.
Verplaetse & Lambrechts [2019] have very clearly summed up the findings of previous surveys in that respect. The MeLLANGE survey, in 2006, showed that only 41.8% of respondents used corpora when translating. More recent surveys, one in Spain with 526 Spanish respondents (Gallego-Hernández [2015]) and one in Switzerland with 202 respondents (Picton et al. [2015]), showed that the rate of translators using corpora has only slowly increased: “nearly 50% and even 70% of respondents […] use corpora sometimes, often or very often” (Verplaetse & Lambrechts [2019: 6]). The global survey conducted in 2015 by Zaretskaya et al. [2018] pointed to a very low adoption of corpora as a translation aid: 85% of the 736 participants from 88 countries had never worked with textual corpora. Finally, Verplaetse & Lambrechts’s own small-scale survey, conducted among 101 Belgian and Dutch professional translators in 2017-2018, only confirmed previous surveys: only 48 respondents (vs. 53 who did not) said they used corpora when translating. Another finding was that in-house translators seem to adopt corpus use more readily than freelance translators.
Not only are translators not very familiar with corpora in general, but they are not familiar with corpus-analysis tools in particular. Zaretskaya et al.’s study showed that “one type of technologies, concordance systems, was completely unknown to a great number of translators who replied to [the] question” [2018: 45].
- 30 https://www.english-corpora.org/, accessed on July 21, 2023.
- 31 For example, there is now a dropdown list to search for particular parts of speech, while there use (...)
- 32 https://www.sketchengine.eu/, accessed on July 21, 2023.
- 33 https://www.sketchengine.eu/#blue, accessed on July 21, 2023.
- 34 Concordances are actually presented as being “examples of use in context” (https://www.sketchengine (...)
- 35 This feature “processes the word’s collocates and other words in its surroundings. […]. The results (...)
- 36 More specifically, it allows users to access the following 16 pieces of information: “1) lemma and (...)
- 37 The 2022 version of the EMT competence framework now clearly highlights corpora in Competence #16: (...)
This situation seems paradoxical for two main reasons. First, over the past decade, corpus-querying tools (especially concordancers) have been much simplified: for example, the interface for English Corpora30, developed by Mark Davies, is now much more user-friendly than it was when first created for the COCA and BNC corpora31. Tools such as Sketch Engine32 have been popularized: while they were originally tools almost exclusively designed for and used by lexicographers, they are now aimed at a wide array of users – Sketch Engine explicitly names “linguists and lexicographers, translators, terminologists, text analysts, people involved in product naming, teachers and students, historians”33 – and are particularly user-friendly. Sketch Engine also now includes a very detailed user guide with tutorials and a glossary that contains core concepts in corpus linguistics (alignment, query, regular expression, subcorpus…). Finding examples of use or examples in context, for both words and phrases, is presented as one of the most prominent and easy-to-use features of the tool34, especially thanks to Word Sketch, which is “a one-page summary of [a] word’s grammatical and collocational behavior”35 (but which used to be used exclusively by lexicographers). The corpus tool English Corpora devised by Mark Davies also now includes a feature called Word Sketch, which is very easily accessible and provides a wealth of information for each of the top 60,000 words of a given corpus36. Second, as stated by Bowker [2004: 225], the use of various corpora in translator training programs “has been well established since the late 1990s”, a trend that has been reinforced in Europe, for instance in universities that are members of the European Master’s in Translation network, as evidenced by the results of a survey conducted by Rothwell & Svoboda [2019]37. Nevertheless, Bowker’s prediction [2004: 237] regarding the rising number of technologically-trained graduates does not seem to have been confirmed yet.
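To give a concrete idea of the collocational summaries such tools produce, here is a toy sketch using NLTK's standard collocation finder to rank the collocates of a headword. It is only a rough approximation of the association scores behind a Word Sketch, not Sketch Engine's actual implementation, and the miniature corpus is invented:

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Miniature corpus standing in for a large reference corpus.
text = ("a bomb explosion killed three people . the bomb explosion caused panic . "
        "an explosion of anger erupted . the explosion of protest spread . "
        "an explosion of anger followed . a population explosion among deer .")
words = text.split()

finder = BigramCollocationFinder.from_words(words)
# Keep only bigrams involving the headword under scrutiny...
finder.apply_ngram_filter(lambda w1, w2: "explosion" not in (w1, w2))
# ...and seen at least twice, then rank them by pointwise mutual information.
finder.apply_freq_filter(2)
for w1, w2 in finder.nbest(BigramAssocMeasures().pmi, 5):
    print(w1, w2)
```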
- 38 As shown by the rising number of corpus-related continuing education training modules for professio (...)
Despite the simplification and popularization of these tools, and although more and more professional translators are asking to be trained to use them38, the relationship between professional translators and corpora remains complicated: corpora and corpus tools are still partly viewed by translators as tools mainly aimed at researchers (Mikhailov [2021]) or considered “stuff for linguists” (Loock [2023]). In fact, translators resort much more readily to data-driven dictionaries and CAT tools, which we now present.
- 39 https://www.linguee.fr/, accessed on August 4, 2023.
- 40 According to Schmidhofer & Mair [2018], “its beginnings date back to the year 2007, when Gereon Fra (...)
Linguee39, which was released about a decade ago40, is now massively used for translation, in particular by trainee translators (Josselin-Leray [2021], León Araúz et al. [2019: 137], Kübler [2013: 5]) or young graduates (Frérot & Karagouch [2016: 15]), but also by professional translators (Kilgarriff [2022: 84]), and one of the main reasons for this massive use is that it shows words in context (Buyse & Verlinde [2013: 509]):
Linguee […] invites its users, more than traditional (on or off line) dictionaries do, to look for the correct words and their correct form and combination in context: when one looks up a translation in the English-Spanish dictionary of Linguee, a drop down list pops up with different combinations of the word (collocations, prepositional phrases…) and after selecting one of these possibilities all occurrences of that combination in a large translation corpus are listed in a two-column format in context.
- 41 Zetzsche [2020: 171] explains that, in 2012, Linguee had launched two fee-paying models, Linguee Pr (...)
- 42 According to Durán Muñoz [2012: 83], “the example sentences come from bilingual websites, particu (...)
- 43 Kübler [2013: 10-12] very explicitly warns against the use of Linguee as the sole resource for tran (...)
However, the constantly changing nature of this tool makes it hard to define. As we have shown in Josselin-Leray & Rossi [2022], this tool is presented by its designers as a “dictionary” which allows users to “search 1,000,000,000 translations”, but it is neither an advanced dictionary for translators, nor a proper concordancer, nor even a proper translation-memory tool as defined in Section 1.4.4. In 2012, Durán Muñoz presented it as a “sophisticated bilingual search engine” [2012: 83], Verplaetse & Lambrechts [2019: 17] call it “an example of parallel corpora on the web”, while Resende & Way [2021] call it “a confused MT system with dictionaries and aligned corpora”. Even though seeing a word or a phrase in context in that tool may undeniably be useful to translators, we noticed (Josselin-Leray & Rossi [2022]) two main types of shortcomings. Regarding the functionalities of the tool, (i) the result of a query is a sort of fuzzy highlighting and there is no KWIC format, (ii) there is no indication regarding frequency, (iii) the format of the extracts is necessarily a sentence, and the context cannot be slightly expanded, i.e., the user cannot see what immediately precedes or follows the sentence; one only has access to the whole document – when it is available – from which the context is taken, (iv) there is no possibility for fine-tuned queries, unlike with a proper concordancer41. Regarding the details of the underlying corpora, we have identified two main flaws: (i) information on the size of the corpora and the way they have been compiled is not provided42, (ii) the direction of the translation is not mentioned, leaving the user to wonder which text is the source text and which is the target text, as was already noted by Kübler a decade ago [2013: 2, 10]43. We may also add that the quality of the “translations” provided is sometimes questionable.
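As a way of making these missing functionalities concrete, the sketch below shows what a query over a sentence-aligned corpus might look like when it does report a frequency count and makes the translation direction explicit. The function, corpus and output format are invented for illustration and bear no relation to Linguee's actual implementation:

```python
def bilingual_concordance(aligned_pairs, query):
    """Search the English side of a sentence-aligned EN>FR corpus and
    report the matches together with a frequency count."""
    hits = [(en, fr) for en, fr in aligned_pairs if query.lower() in en.lower()]
    print(f"'{query}': {len(hits)} hit(s) in {len(aligned_pairs)} segment pairs")
    for en, fr in hits:
        print(f"  EN (source): {en}\n  FR (target): {fr}")

# Toy aligned corpus; real tools draw on millions of segment pairs.
corpus = [
    ("The explosion damaged the building.", "L'explosion a endommagé le bâtiment."),
    ("An explosion of anger followed.", "Une explosion de colère a suivi."),
    ("The report was published in 2020.", "Le rapport a été publié en 2020."),
]
bilingual_concordance(corpus, "explosion")
```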
Since the 1990s, Computer-Assisted Translation tools such as RWS (formerly known as SDL Studio), MemoQ, DéjàVu, OmegaT, Wordfast or MateCat have been increasingly popular among professional translators, be they in-house or freelance translators, and are also used by trainee translators (Kenny [2020]). The study by Picton et al. [2015], for instance, shows a proportion of 82% of CAT tool users, and Verplaetse & Lambrechts [2019: 16] found that a vast majority of their respondents (80%) used CAT tools. The findings of the European Language Industry Survey (ELIS) report [2023]44 show a similar trend, with CAT tools being implemented in over 90% of the language industry businesses surveyed.
Doherty [2016: 950] gives the following definition of a CAT tool, often referred to as a Translation Memory (TM): “a software program that stores a translator’s translated text alongside its original source text, so that these pairs can later be reused in full or in part when the translator is tasked with translating texts of a similar linguistic composition.” He adds [2016: 950]: “With these suggested matches, the translator can assess their quality and contextual appropriateness45 and use them in full or in part by editing (e.g., additions, deletions, substitutions)”.
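Doherty's definition can be made concrete with a toy model: a translation memory reduced to a list of (source, target) pairs queried by similarity. The sketch below uses Python's standard difflib as a crude stand-in for the match-scoring algorithms of commercial CAT tools; the data, function name and threshold are ours:

```python
from difflib import SequenceMatcher

# Toy translation memory: paired source/target segments.
tm = [
    ("The device must be switched off before cleaning.",
     "L'appareil doit être éteint avant le nettoyage."),
    ("Press the red button to stop the machine.",
     "Appuyez sur le bouton rouge pour arrêter la machine."),
]

def best_match(segment, memory, threshold=0.7):
    """Return the stored pair most similar to the new source segment,
    with its score -- a very rough analogue of a fuzzy-match lookup."""
    scored = [(SequenceMatcher(None, segment.lower(), src.lower()).ratio(), src, tgt)
              for src, tgt in memory]
    score, src, tgt = max(scored)
    return (round(score, 2), src, tgt) if score >= threshold else None

# A near-match: one word differs from the stored segment.
print(best_match("The device must be switched off after cleaning.", tm))
```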
- 46 As Bundgaard [2017: 126] puts it, “A TM is basically a database of paired source and target texts d (...)
- 47 Our emphasis.
On the face of it, translation memories can appear as efficient tools to help translators with context-related issues: they are founded on the very concept of “context matching”, performed either by the tool itself46 or by the user. Most CAT tools indeed now have a specific concordance feature – a rather tricky denomination (called “context match” in SDL Trados, Killman [2015]) – which “allows translators to search for and retrieve segments below sentence level (called sub-sentence matching) from the TM database” (Bundgaard & Christensen [2019: 17]). With this feature, translators can type or paste a word or a string of words in a search window and “are then presented with a list of translation units from the TM (source and target segments in which the word or string of words searched for are highlighted)” [2019: 17]. This feature is extremely popular among translators: Zaretskaya et al.’s study found that about a quarter of the professional translators surveyed (103 out of 403) mentioned concordance search as one of their favorite features in CAT tools. Bundgaard & Christensen’s workplace study [2019] among eight Danish professional in-house translators who were asked to perform a post-editing task (editing both TM segments and segments that had been machine translated) showed that the concordance search was the preferred resource for all translators [2019: 30], which led the authors to ask whether this feature was “the new black”, and to go as far as asserting that it seems to have replaced bilingual dictionaries as professional translators’ “go-to guy”. Those findings sound paradoxical, as underlined by Zaretskaya et al. [2018: 51]. In fact, most professional translators still do not seem to make a clear distinction between a corpus and a translation memory or CAT tool, which is confirmed by the survey carried out by Verplaetse & Lambrechts [2019: 16-17]. This confusion might be accounted for by the number of similarities that those two tools share, as we have shown in Josselin-Leray & Rossi [2019]. However, the fact that translators may not be aware of the differences between a corpus and a TM, and that corpora clearly are more efficient tools regarding context, is clearly an issue, as implied by Verplaetse & Lambrechts [2019: 17]47.
Moreover, context can be considered the “Achilles’ heel of translation technologies” and of translation memories, as stated by Killman [2015], a problem for which segmentation is mostly to blame: these tools do not consider co-text beyond the sentence. Killman’s study has shown in detail that TM tools may suggest matches that are in some way semantically or lexically undesirable, even though they ‘match’ from a technical point of view (Killman [2015: 212]), and that TMs entail “contextual blind spots” [2015: 221]. The overall change of paradigm from a linear to a segmental, non-linear approach to translation in TM tools has even led some scholars to theorize on the gradual disappearance of the source text as a complete textual unit (Bédard [2000]). Nevertheless, Jimenez-Crespo [2017: 162] qualifies this statement by saying that “nowadays it is clear that this lack of context and co-text can affect differently professionals and non-professionals”. He argues, referring to a previous study (Jimenez-Crespo [2013: 54]), that
from a cognitive perspective, professionals, through prior accumulated experience with similar texts or by constructing a mental model of what the texts might be, consciously or unconsciously can possess a model of the global text that compensates for this potential lack of communicative context or co-text. (Jimenez-Crespo [2017: 162])
- 48 See Bawden [2018] for a thorough state of the art about contextual machine translation.
To conclude this part on the various tools that can enable translators to access contextual information, one last tool needs to be mentioned, Machine Translation, as it has been gaining popularity among translators since the advent of Neural Machine Translation (Zetzsche [2023: 124]; ELIS survey [2023: 37]), which can be traced back to 2016. However, translators do not retrieve contextual information from Machine Translation in quite the same way as they do from the tools previously mentioned, which is why we will only briefly analyze it here. Let us just mention that, just as for TMs but for slightly different reasons, context has also long been considered the “Achilles’ heel” of MT, to paraphrase Killman’s expression [2015]. The main weaknesses of Machine Translation in terms of context that he identified in his paper, which was written right before the advent of NMT, were “non-text” (e.g., world knowledge) and “ambiguous co-text” [2015: 209]. While huge progress has been made in recent years with NMT, the majority of MT systems still rely on the assumption that sentences can be translated in isolation, which proves to be a serious issue for context-dependent phenomena such as anaphoric pronoun translation, lexical disambiguation, lexical cohesion and adaptation to extratextual elements such as speaker gender and age. The result, according to Bawden [2018: i], is that “these MT models only have access to context within the current sentence; context from other sentences in the same text and information relevant to the scenario in which they are produced remain out of reach.” In her PhD thesis, Bawden proposed a range of strategies to integrate both extra-linguistic and linguistic context into the translation process: pre-processing strategies designed to disambiguate the data on which MT models are trained, post-processing strategies to integrate context by post-editing MT outputs, and strategies in which context is exploited during translation proper. Over the last two or three years, research on so-called “contextual NMT” (“document-level MT”) has developed48, but it mainly focuses on how much context needs to be taken into account for Machine Translation evaluation (e.g., Rauf & Yvon [2020]), in order to better train the systems. Castilho et al. [2020] tested the context span for the translation of 300 sentences in three different domains (reviews, subtitles, and literature) and showed that “over 33% of the sentences tested required more context than the sentence itself to be translated or evaluated”, and, of those, “23% required more than two previous sentences to be properly evaluated”. In one of her most recent publications, Castilho [2022] investigates how much context span is necessary to solve different context-related issues (reference, ellipsis, gender, number, lexical ambiguity, terminology) when translating from English into Portuguese. Using a corpus of 60 documents and six different domains (subtitles, literary, news, reviews, medical, and legislation), she shows that the shortest context span needed to disambiguate issues can appear in different positions in the document (including preceding, following, global, world knowledge), and that the average length depends on the issue types as well as the domain. She also shows that the standard approach of relying on only two preceding sentences as context might not be enough depending on the domain and issue types.
52Section 1. has provided the general background for our study by (i) discussing the theoretical underpinnings of context, (ii) presenting translators’ needs regarding contextual information, (iii) describing the way this type of information has traditionally been found in dictionaries, and (iv) surveying the other resources consulted by this distinct user group. Despite the variety of resources at hand, and the fact that the distinction between those tools is not really clear-cut any more, the contextual information provided in the form of “additional examples” in online monolingual dictionaries can, to some extent, prove useful to translators, as the analysis in Section 3. will show. But let us now introduce the methodology used to carry out that analysis.
- 49 The main aspects covered in those studies are: effectiveness of examples (in relation to the defini (...)
53As stated by Ptasznik [2023: 30], additional examples have become an essential feature of monolingual learners’ dictionaries. Despite this, the feature has received surprisingly little scholarly attention. Numerous studies have admittedly focused on examples in dictionaries, as shown by Ptasznik’s in-depth review of the literature on the topic [2023: 30-33]49. But, to the best of our knowledge, only one metalexicographical study discusses additional examples among other features (Heuberger [2020]), and only two (academic) studies have specifically focused on the additional examples found in online dictionaries, namely Rundell [2015] and, very recently, Ptasznik [2023], whose findings will be summed up below.
- 50 https://www.antimoon.com/how/learners-dictionaries-review.htm, accessed on August 20, 2023.
- 51 According to his website, Szynalski founded a blog aimed at language learners in 2001 “to share the (...)
- 52 Cambridge Advanced Learner’s Dictionary [3rd edition, 2008]; Collins COBUILD Advanced Learner’s Eng (...)
- 53 Because he considered that some words “[do not] need a lot examples” (“you don’t need more than 4 e (...)
54Another study which is worth mentioning here is not an academic study per se but a review by Szynalski [2009]50, published online in the context of English as a Second Language learning51. He reviewed five English dictionaries for advanced learners that were available on CD/DVD at the time52. He first selected around a hundred words and phrases from various sources and genres (e.g., the BBC News website, Wikipedia, an education/technology website, a novel) on the grounds that they “would be interesting to an English learner” and that they “were useful for building one’s own sentences” (e.g., sales breakdown, façade or turn up somewhere). He then looked them up in the five above-mentioned dictionaries and took note, among other things, of the number of example sentences in the entry and the number of additional examples in a separate section called “Wordbank”, “Extra Examples” or similar. Figure 1 confirms that, in the CD-ROM versions, some dictionaries already allowed for customization (i.e., additional examples were not automatically displayed, and it was up to the user to decide whether he/she wanted more examples) and that the feature had disappeared from the last CD-ROM version of COBUILD (cf. Section 1.3.3.). We compiled the data in Table 1 based on the figures provided in Figure 1. The analysis of the data found in Table 1 and Figure 1 reveals that (i) despite some slightly debatable methodological choices in Szynalski’s analysis regarding the way he counted the number of examples53, there is no denying that there is a very large number of extra examples in the dictionaries under study, in particular in COBUILD5 and LDOCE5, and (ii) in most dictionaries (the exception being COBUILD5), additional examples have been filtered or chosen by the editor.
Figure 1. Screenshot of Szynalski’s findings54
Table 1. Number of extra examples by dictionary in Szynalski’s findings

Dictionary | Cambridge 3 | Collins 5 | Longman 5 | Oxford 7 | Collins 6
Extra examples | 58 | 180 | 131 | 32 | 0
55Heuberger’s study does mention the additional example feature, but since it depicts the then state of the art of monolingual online learners’ dictionaries “against the background of the unique opportunities of the electronic medium”, it only briefly examines this feature from the perspective of customization, hybridization and unlimited storage [2020: 409-412]. His study shows that (i) 4 out of the 6 dictionaries that he examines enable users to show or hide further examples, (ii) even though all publishers of those online dictionaries have their own corpora and would have the technical possibility of merging those corpora with the dictionary products, none has implemented a proper hybrid tool which would allow the user direct access to the corpus, and (iii) one area in which the extra value of online learners’ dictionaries appears in comparison with the corresponding print editions is the apparently tremendous increase in the number of example sentences and collocations – he cites the example of the entry goal (noun) in LDOCE, for which there is a 700-word difference in the size of the entry between the online and the print editions, and for which, in the online version, 24 further examples are added, in an extra section, to the 32 examples included in the regular entry. His straightforward suggestion that “The exact extent [of the apparently enormous increase in the number of examples and collocations] in each case would require – and be worth – a study of its own” caught our attention.
- 55 Digitale Wörterbuch der Deutschen Sprache, that we mentioned in Section 1.2.2.3.
- 56 https://www.wordnik.com/, accessed on August 27, 2023. For [more information on this dictionary, se (...)
- 57 For example, “In the entry for toxic, for instance, almost all of the ten example sentences illustr (...)
56Rundell [2015] identified example sentences as one of the three specific areas, along with inclusion criteria and definitions, “where traditional [lexicographic] policies may need rethinking” [2015: 310]. He first summed up the main shortcomings of the additional example feature on the CD-ROMs and DVD-ROMs that we listed at the end of Section 1.3.3., namely “little or no filtering (for quality, appropriacy, etc.) of the examples” and “no mapping of examples for polysemous words to specific senses”. Then, based on a quick look at three online dictionaries, the German dictionary DWDS55, the Oxford Dictionaries Online (ODO), and Wordnik56, he concluded (p. 319) that (i) more research should be done on the use and usefulness of the “concordance feature”, or the dictionary-cum-corpus format as instantiated in the DWDS, depending on the types of users’ needs and skills, (ii) there has been a huge improvement in the mapping of additional examples to meaning in at least one dictionary (ODO), (iii) the average number of corpus-derived additional examples in ODO was about three, and (iv) in dictionaries such as Wordnik, which provides definitions from a range of traditional dictionaries in one column and web-sourced examples in a second column, semantic neology is an issue, as recently-coined meanings have no corresponding definitions in the dictionaries57. However, his study only relied on a quick review of a very limited number of dictionaries and entries.
57Ptasznik’s study [2023] is an empirical study that did include a translation task, but it was conducted within the framework of language learning. 213 subjects participated in the study (143 university students, 70 high school students, all Polish learners of English at an upper-intermediate or advanced level). They had to translate 27 Polish sentences into English (chosen because of the verb they contained) and were provided with a bilingual dictionary and, depending on the experimental or control group, no examples, three examples or eight examples taken from the additional example feature of 7 online learners’ dictionaries. The study’s first aim was to investigate the general usefulness of examples in English monolingual learners’ dictionaries in language production, in other words, to explore the effectiveness of encoding examples. Its second aim was to determine whether the presence of eight examples in a dictionary entry confuses or benefits English learners. The overall conclusions that Ptasznik reached were that the presence of encoding example sentences in dictionaries had “a significant influence on language production in the context of Polish learners of English” [2023: 43] and that “exposure to a multitude of encoding example sentences (eight-examples group) in a dictionary, mixed with target structure and non-target structure examples, does not confuse English learners when it comes to obtaining relevant information in the context of a language production task” [2023: 43]. Apart from the fact that the target users were language learners and not translators, the study shows four other limitations regarding what we want to study. First, it only focused on encoding needs, not decoding ones. Second, it mostly focused on lexico-grammatical patterns of use (and their correct use by participants), and did not delve into areas such as level of semantic equivalence, register, etc. Third, it focused extensively on quantitative, not qualitative, data. Fourth, the experiment was not carried out within an ecologically valid environment, since the participants had no access to other tools, the Internet, dictionaries or books.
- 58 All those websites were accessed extensively between May and September 2023.
58Our study relies on five popular online monolingual English dictionaries, namely:
(i) The Cambridge English Dictionary (henceforth CAMB) (https://dictionary.cambridge.org/dictionary/english/)
(ii) The Collins (Cobuild Learner’s) Dictionary (henceforth COBUILD) (https://www.collinsdictionary.com/dictionary/english)
(iii) The Longman Dictionary of Contemporary English Online (henceforth LDOCE) (https://www.ldoceonline.com/)
(iv) The Oxford Learner’s Dictionary (henceforth OALD) (https://www.oxfordlearnersdictionaries.com/)
(v) The Merriam Webster’s Dictionary (henceforth MW) (https://www.merriam-webster.com/)58.
- 59 The fifth one being the Macmillan Learners’ Dictionary, whose website was abruptly shut down on Jun (...)
The first four are widely acknowledged as being part of the ‘Big Five’59, as Lew [2010] names them, and we felt it necessary to add an American counterpart to those four British dictionaries. This is why we also included the Merriam Webster’s Dictionary, which is considered by Heuberger [2020: 404] as one of the “leading online M[onolingual] L[earner’s] D[ictionaries]”, alongside the other four we have mentioned.
59One methodological issue has to be mentioned here: our intention was to focus exclusively on learners’ dictionaries. However, we came to realize that the distinction between dictionaries aimed at beginners, advanced learners and native speakers is now blurred on several websites. For instance, the MW is no longer presented as a learners’ dictionary, and the Cambridge Learner’s Dictionary does not include any additional examples because it is mostly aimed at beginner or intermediate learners. The Cambridge English Dictionary is described as being aimed at advanced learners of English, and includes additional examples, which is why we decided on that version of the dictionary. In fact, many dictionaries, such as the COBUILD, are now part of a portal, which leads to a rather confusing presentation. The COBUILD, for instance, is only accessible via the Collins English Dictionary.
- 60 For example, the part of speech for “that” is almost systematically wrongly identified in MW (acces (...)
60A quick look at grammatical words such as because or that in the dictionaries under study showed that the treatment of the additional example feature for this category of words was clearly inadequate60, so we decided, for this particular study, to focus exclusively on lexical words, and in particular on one part of speech, i.e., nouns. We chose ten words which we deemed representative of a number of subcategories. Here are the criteria we used and the words we picked:
- Type of language (general language vs. specialized language): explosion vs. lava
- Morphology (simple noun vs. compound noun): house vs. greenhouse
- Register (unmarked word vs. informal word): money vs. dough
- Geographical variation (British word vs. American word): lorry vs. truck
- Neologism: wokeness/awareness61

- 61 Since wokeness was not found in many dictionaries, we extended the criterion to the term awareness.
61We looked up the ten words in the five dictionaries under study and analyzed the example sentences according to the following variables: (i) general presentation and labelling, (ii) number, (iii) size and format, (iv) mode of selection and (v) relevance for translators. The analysis is presented in Section 3.
62We first looked at the way the five dictionaries under study named the additional example feature and how the additional examples were presented. The main information is summed up in Table 2.
Table 2. Various ways of labelling and presenting the additional example sentences
1 | 2 | 3 | 4
Dictionary | Label | Location | Show/hide further examples
CAMB | 1. More examples | Within the dictionary entry | Yes
CAMB | 2. Examples of [name of searchword] | Extra section | Yes
CAMB | 3. All examples | Extra section | Yes
COBUILD | 1. Examples of [name of searchword] in a sentence | Extra section | No
COBUILD | 2. Sentences (dictionary & corpus) | Specific tab outside of the dictionary entry | (Yes)
LDOCE | Examples from the Corpus | Extra section | No
OALD | Extra examples | Within the dictionary entry | Yes
MW | 1. Example sentences | Extra section | (Yes)
MW | 2. Recent examples on the Web | (not specified) | (Yes)
63Column 2 reveals a number of interesting facts. Not all dictionaries indicate explicitly that the examples provided are additional examples, which might be somewhat misleading for users: in all cases, some examples are already included in the corresponding main dictionary entry. One dictionary (COBUILD, unsurprisingly) uses the word “sentence”, insisting on the fact that the headwords are presented in context, which might be more telling for advanced users such as translators. What seems a bit confusing is that, in three dictionaries out of five, there are in fact several ways of accessing additional examples: there are up to three different ways in the CAMB, and the COBUILD has both a list of examples at the very bottom of the page (Figure 2) and a separate tab under which both dictionary examples and corpus examples are included (Figure 3). The usefulness of this duplication might be questioned, all the more so as a quick look at the examples included in both options in COBUILD confirms some partial overlap (as will be seen in Section 3.2. about the number of examples). Another striking fact is that some dictionaries (LDOCE, MW) explicitly mention the source of the extra examples in the very label of the section (corpus, Web), which gives the user a hint that the examples might not have been edited by lexicographers: on the one hand, they might be more authentic but, on the other hand, the reliability of the web-sourced examples might be questionable.
Figure 2. Extract from the entry explosion, noun in COBUILD
Figure 3. Extract from the Sentences tab for explosion in COBUILD
64Column 3 in Table 2 shows that there are two different ways of presenting the additional example feature: either within the dictionary entry in a distinct box, or as an extra section found after the dictionary entry or after several dictionary entries. When the extra examples are presented within the entry, they are necessarily found next to the relevant sense division, after “regular” examples, as can be seen in Figure 4 (CAMB) and Figure 5 (OALD). This suggests that the examples have been analyzed by lexicographers for sense mapping, which is extremely beneficial for users (as underlined by Rundell [2015]) but time-consuming for lexicographers, who have to do it manually. Only two dictionaries (CAMB, OALD) out of five have decided on that option, but only some sense divisions are illustrated by extra examples (e.g., in CAMB there are extra examples only for the first meaning (“the fact of something such as a bomb exploding”), but none for the other senses). The remaining three dictionaries, and the CAMB as well, have an extra section that is found after the entry. In the case of the LDOCE, the section is found at the very end of the entry, after the collocation boxes – when there are any, as can be seen in Figure 6. In the case of CAMB and COBUILD, the section appears after a potentially very long list of dictionary entries, since these learners’ dictionaries, as explained in Section 2.2., are in fact part of a dictionary portal. This, coupled with the fact that there is no sense mapping, might be considered a major drawback for users such as translators, who work under tight time constraints, since scrolling down the long list of dictionaries takes some time.
65Ease of access is an important criterion for this kind of user, and it goes hand in hand with customization. What the data in Column 4 shows is that in only half of the dictionaries do users have the choice of hiding or displaying all or part of the additional examples. By default, extra examples in OALD are hidden, and users have to click on the “plus” sign in the title of the section to have them displayed. Conversely, in LDOCE for instance, extra examples are displayed permanently. CAMB is halfway: by default, all extra examples (the ones that are found within the dictionary entry) are displayed, and users can click on “fewer examples” if they want to see fewer. In MW (see Figure 7), a limited number of examples (usually around three) are permanently displayed, and users can decide to see more. All in all, it seems that there is still rather limited customization of dictionary content in that respect, and room for improvement, as Heuberger [2020: 409] has already noted. He concludes that “The more data electronic dictionaries offer, the more important it becomes for users to be able to determine how much and which information they require.” [2020: 412].
Figure 4. Extract from the entry explosion, noun in CAMB
Figure 5. Extract from the entry explosion, noun in OALD
Figure 6. Extract from the entry explosion, noun in LDOCE
66Table 3 below sums up the various numbers of additional examples found in the five dictionaries under study for the ten words we analyzed.
Table 3. Numbers of additional examples in the five dictionaries under study
67Before listing the various tendencies that this analysis has unearthed, a few methodological issues we were faced with when counting the examples need to be addressed.
First, as mentioned in Section 3.1., many dictionaries include extra examples in various locations. We therefore had to add up all those examples, and the sum is presented within a “total” column for four dictionaries out of five.
Second, in some cases the examples provided do not correspond to the right part of speech; in other cases the examples actually illustrate a compound (cmp) in which the headword is found, not the headword by itself (e.g., truck driver instead of truck). Only the LDOCE makes a clear distinction between the two, so we decided to include all additional examples notwithstanding the possible confusion.
68Here are the most significant findings that our analysis of the data reveals.
69All in all, the total number of extra examples provided in all five dictionaries for just ten words is huge: 1,306. This seems to confirm Heuberger’s assumption that the general increase in the number of examples in online dictionaries is enormous [2020: 411].
- 62 Historically, there was a clear difference between the way COBUILD, LDOCE and OALD made use of thei (...)
70The five dictionaries rank in the following decreasing order regarding the number of additional examples: CAMB, COBUILD, LDOCE, OALD, MW, with the first two ranking far ahead: 436 for CAMB and 383 for COBUILD (which decreases to 330 if the extra examples taken from other Collins dictionaries are not taken into account). The number of extra examples they include is about four times as high as what is provided in MW (104). The ranking order we found is slightly different from what Szynalski found in 2009 (see Figure 1), where it was COBUILD, LDOCE, CAMB, OALD. The Cambridge Dictionary has clearly changed its policy. What is also striking is that the dictionaries that rank first are the ones that are overtly corpus-based62.
71The very last column of Table 3 displays the total number of examples per word. Here is the list of words in decreasing order of the number of extra examples: money (259), house (252), explosion (180), greenhouse (100), awareness (95), truck (88), lorry (85), dough (76), lava (74), wokeness (11). The general tendency seems to be that the most frequent and general words get the highest number of examples (money, house, explosion) while more specialized (lava) or recent (wokeness) words get fewer examples. Some words get no additional examples at all in some dictionaries, in particular, rather surprisingly, dough in CAMB. MW does not provide any additional examples at all for five words out of ten (dough, lorry, truck, wokeness, awareness) in its “more examples” section (though there are examples taken from the Web).
72Another way of analyzing the data, which might be more informative than the overall number, is to look at the mean number of examples per word in relation to the frequency of the word in a general-purpose corpus. In Table 4, the ten words under analysis are ranked in decreasing order of the mean number of examples per word. To investigate frequency, we relied on several indicators: (i) the “Longman Communication 3000”, a list of the 3,000 most frequent words in both spoken and written English63, based on statistical analysis of the 390 million words contained in the Longman Corpus Network, (ii) the number of (frequency) stars or dots provided in the LDOCE64, (iii) the Oxford 3,000 and 5,000 lists, in which the words have been chosen “based on their frequency in the Oxford English Corpus and relevance to learners of English”65, and (iv) the Common European Framework of Reference (CEFR) level as provided by the Oxford 3,000 and 5,000 lists.
Table 4. Mean number of examples per word and word frequency
73What Table 4 reveals is that the number of examples per word seems to correlate strongly with the frequency of the word in the general language, and thus to confirm our hypothesis: the most frequent words (money, house, explosion) get the most examples. Greenhouse, which gets a relatively high number of examples, is considered a rather high-frequency word in the Oxford lists, which are more recent than the Longman 3000 (and which might rely on a corpus in which environmental issues are more present). Words that are more specialized (lava) or recent (wokeness) are less frequent and thus get fewer examples. The case of dough is more complex: its low frequency might account for the rather low mean number of examples, but further investigation could reveal whether this is related to a register issue (informal) or a specialization issue (the baking field).
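As an illustration of the kind of check underlying Table 4, the following sketch computes a rank correlation between the example counts reported above and word frequency. The frequency ranks used here are illustrative placeholders, not the actual Longman or Oxford figures.

```python
# Illustrative sketch only: the example counts are those reported above, but
# the frequency ranks (1 = most frequent) are placeholders, not the actual
# Longman/Oxford data.
from scipy.stats import spearmanr

extra_examples = {"money": 259, "house": 252, "explosion": 180,
                  "greenhouse": 100, "awareness": 95, "truck": 88,
                  "lorry": 85, "dough": 76, "lava": 74, "wokeness": 11}
frequency_rank = {"money": 1, "house": 2, "awareness": 3, "explosion": 4,
                  "greenhouse": 5, "truck": 6, "lorry": 7, "dough": 8,
                  "lava": 9, "wokeness": 10}  # hypothetical ranks

words = list(extra_examples)
# Negate the rank so that a higher value means a more frequent word.
rho, p = spearmanr([extra_examples[w] for w in words],
                   [-frequency_rank[w] for w in words])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```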
74Such a correlation might be relevant for learners of English, who are the primary target group of the dictionaries under study, and who need to master the core of the English language in order to communicate effectively in both speech and writing; for translators, however, numerous examples for less frequent words would be particularly useful.
75On the whole there seems to be no precise policy regarding the number of examples to be provided in each dictionary for each word. For instance, in the “all examples” section of CAMB there are 84 examples for money vs. seven examples for lorry (which is slightly more than for truck, even though the former is supposed to be more frequent in British English); in OALD there are 67 extra examples for money and none for lava, greenhouse and dough. For all the dictionaries that are corpus-based, it probably depends on the contents of the corpus being used.
76The only aspect for which there seems to be some consistency is the number of examples that are first displayed before the user chooses to get more examples in CAMB (five examples, except for lava, greenhouse, wokeness, i.e., specialized or recent words), and in COBUILD (ten examples, except for explosion and wokeness). As for MW, it consistently provides eight examples in the Web-sourced examples section, three of which are initially displayed (Figure 7).
77Finally, there do not seem to be any particular criteria for the ordering of the additional examples, which might be randomly sampled and ordered when they are automatically retrieved.
Figure 7. Extract from the entry explosion, noun in MW
78We analyzed the way the additional examples are visually presented to the user. Table 5 sums up whether the search word is highlighted or not.
Table 5. Highlighting of the search word in the five dictionaries under study
Name of dictionary | Label | Highlighted search word
CAMB | 1. More examples | No
CAMB | 2. Examples of [name of searchword] | Yes (italics)
CAMB | 3. All examples | Yes (italics)
COBUILD | 1. Examples of [name of searchword] in a sentence | No
COBUILD | 2. Sentences (dictionary, corpus) | No
LDOCE | Examples from the Corpus | Yes (bold)
OALD | Extra examples | No
MW | 1. Example Sentences | Yes (italics)
MW | 2. Recent examples on the Web | Yes (italics)
79What stands out is that not all dictionaries highlight the search word (by means of italics or bold type), which makes it harder to spot. The lack of this usability feature may be considered a drawback for translators. Indeed, one of our previous studies has shown that a context considered useful by translators was one that highlighted the search word: a majority of the translators interviewed “stressed the importance of clearly displaying the search word in bold or colour font in the context, for easier consultation” (Picton et al. [2018: 126]). The lack of visibility also makes the additional examples different from a KWIC concordance format, where the search word is easy to observe, as we will discuss below.
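Implementing such highlighting is technically trivial, as the following sketch shows; the <b> markup and the handling of inflected forms are our own illustrative choices, not any publisher’s implementation.

```python
# Minimal sketch of searchword highlighting in additional examples; the <b>
# markup and the \w* handling of inflected forms are illustrative choices.
import re

def highlight(example: str, searchword: str) -> str:
    # \w* also catches simple inflected forms such as "explosions".
    pattern = re.compile(rf"\b({re.escape(searchword)}\w*)", re.IGNORECASE)
    return pattern.sub(r"<b>\1</b>", example)

print(highlight("Eight people were injured in the explosion.", "explosion"))
# -> Eight people were injured in the <b>explosion</b>.
```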
80The length of the additional examples varies greatly from a very short noun phrase to a rather long and complex sentence, as exemplified in Table 6 for explosion and occasionally for lava and awareness. The web-based example for explosion in MW is 38 words long.
Table 6. Length of the additional examples
Name of dictionary | Label | Noun phrases | Full sentences
CAMB | 1. More examples | No | Yes: Eight people, including two children, were injured in the explosion
CAMB | 2. Examples of [name of searchword] | No | Yes: Section 4 contains necessary conditions for heteroclinic points to be chain explosions.
CAMB | 3. All examples | No | Yes: The average velocity of the glass fragments was 295 ft./sec in the 18 g air-sphere explosions, and 215 ft./sec in the 16 g helium-sphere explosions.
COBUILD | 1. Examples of [name of searchword] in a sentence | No | Yes: The force of the explosion could have blown him out into the sea.
COBUILD | 2. Sentences (dictionaries) | No | Yes: Three people have been killed in a bomb explosion in northwest Spain.
COBUILD | 3. Sentences (corpus) | No | Yes: The second guard was kneeling on the floor, his hands clasped over his eyes, temporarily blinded by the explosion.
LDOCE | Examples from the Corpus | Yes: An explosion of laughter; A nuclear explosion; Political awareness | Yes: Rabbits and ducks have been contributing to a population explosion in the park.
OALD | Extra examples | Yes: a great explosion of creativity; a sudden explosion in the number of students | Yes: A huge explosion rocked the entire building.
MW | 1. Example sentences | Yes: a flow of molten lava | Yes: The filmmakers staged the car’s explosion.
MW | 2. Recent examples on the Web | No | Yes: And the Kremlin’s image took another battering this week with the apparent death of the mercurial mercenary leader Yevgeny V. Prigozhin in a crash following what U.S. and other Western officials say was an explosion aboard his private jet.
81Most of the examples take the form of full sentences. In examples that are automatically retrieved, a sentence is defined as a string of characters starting with a capital letter and ending with a period. This kind of segmentation, which is the same as the one used in CAT tools (see Section 1.4.4.), might yield some problematic results. For instance, one can find the following truncated sentence in COBUILD for lava:
I was watching the lava flow out of Mt.
Christianity Today (2000)
where the end of the sentence has been omitted because the volcano’s name contains a period (“Mt.”), a shortcoming that is well known to people who use TMs and corpora. The length and the lexical and syntactic complexity of the sentences vary rather markedly depending on the dictionary and, quite obviously, on whether the sentences have been made up or edited by lexicographers, or have not been edited at all (e.g., examples from the Web in MW). Examples that are shorter than a sentence tend to take the form of Multi-Word Expressions, in particular noun phrases; this is the case in three dictionaries (mostly in LDOCE, but also in OALD and occasionally in MW), but there are no explicit criteria.
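The Mt. truncation can be reproduced with a few lines of code. The sketch below implements the naive capital-to-period rule described above and an abbreviation-aware variant; the abbreviation list is of course illustrative.

```python
# Sketch of the naive segmentation rule described above and of the truncation
# it produces on abbreviations such as "Mt."; the abbreviation list in the
# second splitter is illustrative only.
import re

text = "I was watching the lava flow out of Mt. Etna. It was spectacular."

# Naive rule: a sentence ends at any period followed by whitespace.
naive = re.split(r"(?<=\.)\s+", text)
print(naive[0])   # -> "I was watching the lava flow out of Mt."  (truncated)

# An abbreviation-aware splitter avoids the truncation.
aware = re.split(r"(?<!Mt\.)(?<!Mr\.)(?<!Dr\.)(?<=\.)\s+", text)
print(aware[0])   # -> "I was watching the lava flow out of Mt. Etna."
```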
82The phrase or sentence format differs from the format allowed by KWIC concordances in that the search word is not instantly visible, unless it is in bold or italics, but, most importantly, because the searchword can be found anywhere within the sentence, including at the very beginning (e.g., explosion in OALD) or at the very end (which happens in most cases for explosion in Table 6). This implies that the user does not always have access to context both to the right and to the left of the searchword, unlike in a KWIC concordance format, which uses a wide layout to display maximum in-context information – something that real, direct access to the underlying corpora would allow.
83The choice of noun phrases and full sentences can be accounted for by the fact that this is the format most users, including translators, are familiar with, and we can gather that the noun phrases have been selected because they correspond to collocations. However, a KWIC format (which used to be available in CD-ROM versions of some dictionaries, see Section 1.3.3.) would allow for more context and, when a large number of examples is provided, would “bring out patterns that are ‘immediately obvious’ but which otherwise may pass unnoticed” (Clear [1987: 41]).
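For readers unfamiliar with the format, the following sketch shows what a minimal KWIC display of three of the explosion examples from Table 6 would look like; real concordancers obviously offer sorting and much wider windows.

```python
# Minimal KWIC sketch: the searchword is aligned in a centre column with a
# fixed window of co-text on each side, so that recurring patterns stand out.

def kwic(sentences: list[str], searchword: str, width: int = 35) -> None:
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            if w.strip(".,;!?").lower() == searchword.lower():
                left = " ".join(words[:i])
                right = " ".join(words[i + 1:])
                print(f"{left[-width:]:>{width}}  {w}  {right[:width]}")

kwic(["The force of the explosion could have blown him out into the sea.",
      "Three people have been killed in a bomb explosion in northwest Spain.",
      "A huge explosion rocked the entire building."], "explosion")
```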
84Based on the information that was available on the various websites, we identified which dictionaries provided additional examples that had been automatically retrieved. The information is summarized in Table 7 below.
Table 7. Mode of selection of the additional examples
Dictionary | Label | Automatic retrieval
CAMB | 1. More examples | No
CAMB | 2. Examples of [name of searchword] | Yes
CAMB | 3. All examples | Yes
COBUILD | 1. Examples of [name of searchword] in a sentence | Yes
COBUILD | 2. Sentences (dictionary) | Yes?
COBUILD | 3. Sentences (corpus) | Yes
LDOCE | Examples from the Corpus | No
OALD | Extra examples | No
MW | 1. Example sentences | No
MW | 2. Recent examples on the Web | Yes
- 66 Fox [1987: 14] explains how examples that have been edited by lexicographers may sound hardly authe (...)
85Three dictionaries out of five explicitly mention the fact that (some of) the examples have been automatically selected, either from the Web or from a more specific corpus, as we will see below, and what is implied is that their content has not been edited by lexicographers. (Web) examples in MW are “programmatically compiled from various online sources”, examples in the CAMB are “from corpora and from sources on the web” and examples in the COBUILD “have been automatically selected”. COBUILD is the only dictionary that states that “the corpus examples were extracted using an algorithm and have not been editorially reviewed”, a fact that the publisher sees as a potential weakness. For the remaining two dictionaries, and for part of MW and CAMB, even though it is not officially announced by the publishers, the examples seem to have been manually selected and, most probably, edited, or even made up. The part of the COBUILD that gives access to examples from other Collins dictionaries gives no indication regarding the way these examples have been compiled. Examples in the “Example Sentences” section of the MW for the word explosion seem “too good” to be true and not very natural, especially in contrast with the authentic examples taken from the Web, which contain many more details, as can be seen in Figure 7. The same can be said of the example “I don’t have much dough” found in the “Example Sentences” section of MW66.
86Automatically retrieved examples are a rather recent trend which seems to be expanding, as other online monolingual dictionaries have now adopted this feature (e.g., the free, online monolingual French dictionary Le Robert Dico en ligne67).
87The fact that examples are automatically retrieved has pros and cons. Obviously, this allows for a very large amount of data and, as we will see below, for potentially up-to-date data. On the other hand, what may be considered a drawback is that, in some cases, the examples retrieved are irrelevant for various reasons (a filtering sketch addressing some of them follows the list):
- (i) The part of speech might be wrong. This happens when the corpus that is used has not been part-of-speech tagged. For example, in CAMB, the first two lines of the extra examples of house show first an example of the verb (“However, this is not because of the objects it houses”) and then of the noun (“Their relationship with the houses is beyond doubt, as they do not occur further away from the plan”). This also happens in the first two lines of the extra examples provided in the COBUILD for house (“Would they give one up to house a refugee family”, The Sun, 2016), and is also often the case in MW, with examples taken from the Web. While it is understandable that algorithms might fail to work on Web data, which are not necessarily POS-tagged, it is rather surprising that both the Collins and the Cambridge corpora seem not to be POS-tagged. Another explanation could be that these are just instances of POS-tagging errors, or that the use of the POS-tagged version of the corpora is restricted to the lexicographers.
- (ii) The word has not been identified as being part of a compound, which should appear in a separate dictionary entry. Greenhouse and lava can be considered specialized words or terms, which implies that they are often used to build other, more complex terms, typically greenhouse gas and lava flow. The algorithms used to automatically retrieve the examples do not make a distinction between the stand-alone term and the term within a compound. For instance, five out of the first six additional examples provided for greenhouse in COBUILD do not illustrate the use of greenhouse, but the use of greenhouse gas. The same can be said of the examples provided by LDOCE for greenhouse (greenhouse gases), while MW yields greenhouse plants and greenhouse farming. Among the 8 additional examples provided by LDOCE for lava, only half illustrate lava, while the other half illustrate lava flow (1), submarine lava flow (1), molten lava (1) and river of lava (1). COBUILD provides at least one context for lava which, in fact, illustrates lava lamp, which might be totally irrelevant for the translator of a text dealing with volcanoes. This is an issue for translators, who are often faced with very precise terms, and a well-known problem in terminology, where it is often hard to determine whether a lexical unit is a term per se or just part of a compound term (L’Homme [2004: 166-200]). This problem, however, has been addressed (but not solved) by automatic term acquisition tools which can extract both simple and complex terms from corpora68.
- (iii) There might be homographs. This issue is raised by the lexicographers of the free online version of the French Robert Dico en ligne69, but there were none in the corpus of entries we examined. This is probably the issue that lexicographers at CAMB have in mind when they ask users to report example sentences where “the word in the example sentence does not match the entry word”.
- (iv) The same examples may have been retrieved several times. Duplication of examples (which often happens with corpus tools) is an issue that lexicographers are aware of: they now allow users to send feedback about the data found in the dictionary, in particular in that respect. MW users are expected to report “duplicate example sentences”. Unlike some other issues such as homographs (iii) or absence of sense mapping (v), this problem could easily be solved automatically.
- (v) Finally, and this seems rather self-evident, there is no sense mapping, and there are many other semantic issues, which will be discussed below. Lexicographers at MW are aware of the semantic weaknesses since users are invited to report the extra examples that fall into the following two categories: “error in the example sentence” and “example sentence does not match the definition”. However, those two categories are too vaguely phrased (for instance, one may wonder which definition is meant when the word is polysemous), and might in fact better correspond to neology-related issues.
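Issues (i), (ii) and (iv) could in principle be addressed automatically. The sketch below shows one possible filtering pass, assuming spaCy (and its en_core_web_sm model) for POS tagging and dependency parsing; the checks are our own illustration, not what any of the publishers actually does.

```python
# Illustrative filtering pass for issues (i), (ii) and (iv) above, assuming
# spaCy for tagging/parsing; this is not any publisher's actual pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

def keep_example(example: str, headword: str, pos: str, seen: set) -> bool:
    # (iv) drop duplicates, after trivial whitespace/case normalization
    key = " ".join(example.lower().split())
    if key in seen:
        return False
    for token in nlp(example):
        if token.lemma_.lower() == headword:
            if token.pos_ != pos:          # (i) wrong part of speech
                return False
            if token.dep_ == "compound":   # (ii) e.g. "lava lamp", "greenhouse gas"
                return False
            seen.add(key)
            return True
    return False  # headword not found at all

seen: set = set()
candidates = ["Five lava lamps glowed in the corner.",   # lava = compound modifier
              "The lava flowed out of the crater.",      # genuine use
              "The lava flowed out of the crater."]      # duplicate
kept = [c for c in candidates if keep_example(c, "lava", "NOUN", seen)]
# kept -> ["The lava flowed out of the crater."]
```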
88To conclude on this aspect, what is particularly striking is that the main drawback that seems to matter to the publishing houses regarding automatic retrieval is the appropriateness of the content. COBUILD, MW and CAMB all warn against “opinions” or “sensitive content” that do not reflect the respective publisher’s opinions. Both CAMB and MW expect users to report example sentences that contain “offensive/harmful/abusive content”, but none of the other issues, in particular the semantic ones, is mentioned.
89Translators need to make sure the information they use to make translation choices is reliable, so they need to be provided with as much information as possible about the sources where the examples are taken from, which is why we now investigate this aspect.
90Only one dictionary, COBUILD, provides additional examples taken from other dictionaries from the same publishing house. This may be beneficial for users, but there is no mention of which dictionaries they are taken from and no formal indication as to whether they have been coined by lexicographers or taken from an authentic corpus. Given the size of the examples and the type of vocabulary they contain, it could be both. “The club is planning a public debate” or “This house believes that journalism has not gained from the introduction of new technology” seem to come straight from a corpus, while “she has moved to a smaller house” could have been either made up or corpus-based but simplified. Providing metadata on the source of the examples would make the feature even more useful.
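The kind of metadata we have in mind could be as simple as the following record; the field names are our own suggestion, not any publisher’s actual schema.

```python
# Sketch of a metadata record for additional examples; field names are our
# own suggestion, not any publisher's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExampleRecord:
    text: str                       # the example sentence itself
    headword: str
    origin: str                     # "corpus", "web", "other dictionary", "invented"
    source: Optional[str] = None    # e.g. newspaper title or book author/title
    year: Optional[int] = None
    genre: Optional[str] = None     # e.g. "press", "literature", "conversation"
    edited: Optional[bool] = None   # edited by lexicographers, or raw extraction?

example = ExampleRecord(
    text="Three people have been killed in a bomb explosion in northwest Spain.",
    headword="explosion", origin="other dictionary",
    genre="press", edited=None)  # unknown fields simply stay None
```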
91Three dictionaries out of five explicitly mention that the additional examples are taken from a corpus: CAMB, COBUILD and LDOCE.
92CAMB mentions the Cambridge English Corpus as the source of each of the additional examples. This multi-billion-word corpus was put together by Cambridge University Press, and its content has kept growing for over 20 years. In 2014, it reached 1.8 billion words, and contained various samples of written and spoken “expert” speaker English (i.e., English spoken by native speakers or by people who have a very high proficiency in English). It contains various genres and registers, namely “newspaper reports, journal articles, radio broadcasts, emails, text messages, tweets, family conversations over dinner, reports of business meetings”70. Some occasional examples are taken from the Hansard archive. The corpus used is thus rather up to date, but a major shortcoming is that the only source mentioned for each of the examples is the Cambridge corpus as a whole, while metadata regarding the subcorpus used would have been more informative.
93According to the information gathered in various parts of the website71, all of the examples in COBUILD are examples of real English, taken from the Collins Corpus, a corpus of over 20 billion words which is updated monthly and which “contains written material from websites, newspapers, magazines72 and books published around the world, and spoken material from radio, TV and everyday conversations”. The Bank of English, a subset of 650 million words, is used by lexicographers on a daily basis, while the full Collins Corpus is used to check more widely for extra information. Another subset is the Collins Technical Corpus, which is made up of academic and professional journal articles from a variety of subject fields, including science and humanities. The example sentences that are recorded in a separate tab are said to be extracted from Collins “resources”73, which, judging by the examples we analyzed, might encompass more resources than the Collins Corpus. In fact, most of the additional examples we studied were taken from the British press; for instance, the sources of the 10 extra examples for the word money are The Guardian (5), The Sun (2) and The Times/Sunday Times (3). These are followed by an extra section called Quotations, which is a rag bag of the Bible, literature, pop culture (Bob Dylan, Arnold Schwarzenegger), and idioms and proverbs, as can be seen in Figure 8, and whose usefulness might be questioned, especially when the quotations are in fact translations (Dostoevsky). For truck, there is no consistency in what is presented and how it is presented, as so-called “literary” quotations are found among press quotations.
Figure 8. Quotations for the entry money, noun in COBUILD.
94Unlike the CAMB, the COBUILD does provide some metadata: name of the source (in most cases, the title of the newspaper or magazine) and year of publication. In the case of literary quotations, the name of the author is provided, but the title of the book or the year of publication is not always given.
95All of the extra examples in LDOCE are explicitly said to be taken from “the” Corpus, as explained in Section 3.1. Hardly any information is found on the website regarding the contents of that corpus, so we might assume that the corpus from which the examples are retrieved is the “Longman Citation corpus”, as it is referred to in the print versions, which has traditionally been used by lexicographers at Longman, and which originated in the Longman/Lancaster English Language Corpus, a corpus of over 30 million words in 1993 (Summers [1993]).
Finally, OALD does not explicitly mention on the website that the examples are taken from a corpus.
96In short, here are the facts that stand out: (i) the corpora that are being used as a basis for the examples are not always very clearly presented to users; (ii) the amount of metadata provided ranges from none (LDOCE, OALD) to, in the best-case scenario, which only applies to literature, the author’s name, title and date (COBUILD). These first two findings can be considered an issue for translators, as one of our previous studies, based on empirical evidence, has shown that a useful context for translators is “a context originating from an identifiable source” (Picton et al. [2018: 124]). After performing a translation task in an ecologically valid environment during which they had to choose which automatically retrieved contexts they deemed most useful, translators voiced that they would have liked to know the sources of the contextual resources they had to assess so they could be assured of their reliability and relevance (for instance, in terms of textual genre). One of the contexts most frequently chosen by translators (13 out of 42 participants) was one of the very few that provided its source. (iii) Since corpus information and metadata are not systematically provided, one may wonder how often the corpora really are updated (are they closed or open?) and how often the examples are retrieved from the most up-to-date versions of the corpora. On the dictionary’s blog74, COBUILD lexicographers warn, for instance, that “all Wikipedia examples [were] retrieved in 2015” and “acknowledge that the Wikipedia content may vary or update from time to time” (which implies that they may use Wikipedia on top of their in-house corpus). They ask for user input: “if users discover any Wikipedia source that does not appear up to date, please notify us and we will address [sic] accordingly”. This is obviously problematic for neology – something that may matter for translators, who sometimes have to deal with very recent texts. (iv) Most importantly, none of the dictionaries has been hybridized with a corpus: no dictionary provides direct access to the corpora and, as a consequence, there is no concordancing feature, and thus no access to a larger stretch of co-text. This means that, in that respect, not only has there been no improvement compared to the CD-ROM formats that we described in Section 1.3.3., but there has even been a step backwards. Even if we have shown that translators are, in most cases, still a bit wary of corpus tools for translation practice, there could be a middle way, as we will explain in Section 4.
97Only one dictionary out of the five under study includes additional examples that are automatically retrieved from “the Web”, namely MW (which is also one of the reasons why we included it in our study). According to the website, the extra examples “are programmatically compiled from various online sources”. However, the sources are not detailed. From the examples we analyzed, we gathered that the main websites that were being used for data collection belonged to various genres: newspapers and magazines from all over the United States (New York Times, Washington Post, Los Angeles Times, the Arizona Republic, USA Today…), American news websites (CNN, NBC news), nonfiction storytelling (Longreads), but also more specialized resources. The types of sources included here are, in fact, very similar to the ones found in the “in-house” corpora that we mentioned in the previous subsection, but we felt it necessary to make “the Web” a different subsection because the name was used as such by MW – which is why we use inverted commas.
98What is striking is that, for the three words under analysis that could be considered as belonging to a specialized field (greenhouse, lava and dough), the examples provided come from very relevant specialized or semi-specialized sources: e.g., for greenhouse: Better Homes and Gardens, Popular Mechanics, Scientific American, House Beautiful, Ars Technica; for lava: Smithsonian Magazine and Travel+Leisure; for dough: Southern Living and Better Homes & Gardens. Table 8 presents some of the examples that were provided at some point for those three words between early June and late August 2023.
Table 8. Example sentences taken from specialized sources
dough | Drizzle in water, 1 tablespoon at a time, and pulse until dough begins to clump together. —Pam Lolley, Southern Living, 25 Aug. 2023
dough | When the cycle is complete, remove dough from machine. —Bhg Test Kitchen, Better Homes & Gardens, 22 Aug. 2023
greenhouse | During power outages, close the curtains during the day to keep your home from heating like a greenhouse. —Lucy Tu, Scientific American, 5 Aug. 2023
greenhouse | Unlike other small greenhouse kits on the market, this greenhouse is not tall, measuring only three feet in height. —Rachel Center, Better Homes & Gardens, 29 June 2023
lava | Flowing lava permanently altered the surrounding landscape and destroyed more than 700 homes, Brigit Katz reported for Smithsonian magazine at the time. —Nora McGreevy, Smithsonian Magazine, 20 May 2020
lava | During the initial eruption, fountains of lava spewed from fissures in the Earth. —Evie Carrick, Travel + Leisure, 9 Aug. 2023
99Another major advantage of using web-crawled data is that the linguistic evidence is up to date – something traditional corpora such as the ones used by COBUILD cannot really compete with: in the latter, we found very few examples from 2021 and 2022, and none from 2023. The website does not give any indication about the frequency with which the examples are updated – what is mentioned is just the fact that examples are programmatically compiled “to illustrate current usage of [the looked-up word]” – but regular look-ups on the website have shown that the web-based examples change approximately every other week. This provides up-to-date material, but can also prove a bit frustrating for a user who consults the entry several days in a row and does not always find the same information.
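MW does not document its pipeline, but the behaviour we observed (a rotating set of the most recent dated citations) can be approximated by a sketch like the following; the citation pool and field layout are entirely hypothetical.

```python
# Hypothetical sketch of how "recent examples on the Web" could be compiled:
# keep the most recent dated citations containing the headword and refresh the
# selection periodically. MW does not document its actual pipeline.
from datetime import date

citations = [  # (sentence, source, publication date) -- invented pool
    ("The mine explosion trapped twelve workers.", "Example Gazette", date(2023, 6, 2)),
    ("Officials say it was an explosion aboard his private jet.",
     "Example Tribune", date(2023, 8, 24)),
    ("No explosion was reported.", "Example Daily", date(2021, 1, 5)),
]

def recent_examples(pool, headword: str, n: int = 8):
    hits = [c for c in pool if headword in c[0].lower()]
    return sorted(hits, key=lambda c: c[2], reverse=True)[:n]

for text, source, day in recent_examples(citations, "explosion", n=3):
    print(f"{text} ({source}, {day:%d %b %Y})")
```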
- 75 For a thorough analysis of the treatment of woke in existing French general purpose dictionaries an (...)
100Very frequently updated data also implies usefulness for neology, in particular semantic neology. MW is the only dictionary of the five under study that provides substantial relevant information for wokeness, more precisely for the adjective woke. Even if most of this information is not presented within the additional example section, it seemed relevant to comment on it here. As Figure 9 shows, the entry for woke and wokeness is very different from the entries for all the other words that we analyzed in MW, as it includes many long, authentic examples (with a mention of the source, in most cases the name of a journalist) within relevant sense divisions, which also include pragmatic information (e.g., disapproving, often used in contexts that suggest…). This means that the data has been thoroughly analyzed by lexicographers. Nowadays, formal neologisms but also semantic neology tend to be recorded more quickly and more extensively by fully collaborative dictionaries and crowdsourced dictionaries than by traditional dictionaries (Sajous, Josselin-Leray & Hathout [2018], Sajous & Josselin-Leray [2022: 350]), so MW’s extensive treatment of woke and wokeness, two rather controversial terms75, is worth mentioning here.
Figure 9. Screenshot of the entries woke, adjective and wokeness, noun in MW
101Carrying out a fine-grained semantic analysis of the 1,306 additional examples provided for the ten words would have been worth another paper, so we will restrict the analysis here to some general tendencies that we were able to identify.
102As shown above, only the examples provided in the “Example Sentences” section of the MW (and potentially some examples found in the other Collins dictionaries) seem to have been made up by lexicographers. The bulk of the examples are thus either corpus-based but edited by lexicographers (LDOCE, OALD) or raw extractions from corpora (CAMB, COBUILD, web-based examples in MW). While OALD provides extra examples that have been edited but, most importantly, analyzed for sense mapping, LDOCE only provides long lists of unsorted examples, and the raw extractions provided by the remaining three dictionaries have not been filtered at all. In the latter case, there might be irrelevant examples because of duplication, wrong POS identification or wrong headword identification (part of a compound; e.g., lava lamp instead of just lava), as we showed earlier.
103The ordering of extra examples in all dictionaries, especially the automatically retrieved ones, does not seem to match any specific criteria, which is an issue for all types of users. The examples that come up first do not necessarily correspond to very frequent uses of the searchword, which might be an issue for language learners. For instance, the first extra example given for truck in COBUILD is “People have little truck with that kind of things, anyway” (Times, Sunday Times 2016). It does show an interesting pattern (to have (…) truck with sb/sth), but it does not help regarding the core meaning of the word or its most frequent use, as a vehicle. Such an example, however, could be more relevant for a translator, who is more interested in examples related to less frequent uses of the word or to specific patterns. But he/she has no guarantee that the less frequent uses will come up first.
104Another consequence of the absence of sorting of the data extracted from corpora is the under- or over-representation of a given meaning, which is most probably very dependent on the contents of the corpora that are being used. This is particularly striking in the case of dough, where the informal money-related meaning is overwhelmingly underrepresented in the extra examples and the baking-related meaning overrepresented. Out of the ten extra examples provided in LDOCE, only two are related to the informal meaning (“He only married her for her dough”, “I’d go on vacation three times a year too, if I had his dough!”). The ten contexts that are automatically displayed for dough in COBUILD only evidence the baking-related meaning, and a quick look at the other contexts accessible via the “Sentences” feature led us to the same conclusion. The same kind of problem has been observed, for instance, with explosion and house in MW, where the more figurative meanings (e.g., an explosion of anger, the whole house) are underrepresented. Extra examples that correspond to the societal meaning of awareness cannot be found in any of the dictionaries of our study. This balance issue might be representative of the real difference in frequency between the two meanings, but a user who is not familiar with this kind of corpus-related issue might be left with no clue.
105At the very beginning of the paper (Section 1.1.3.), we mentioned that several scholars or terminologists make a distinction between various types of contexts. Sorting the 1,306 examples under analysis based on specific types of contexts would have been a very tedious task, so we just analyze here a few that caught our attention as we were browsing through them. We picked 20, which are presented in Table 9, and for each example we tried to roughly assess the type of contextual information it contained, based on the classification we presented in Section 1.1.4. Such an analysis may be considered subjective, as it is sometimes hard to make clear-cut decisions, which is why there are some question marks here and there in the table.
Table 9. Classification of 20 extra examples based on the type of contextual information contained
- 76 We have used brackets here since the collocational pattern found here applies to lava lamp more tha (...)
# | Context | Dictionary | Sem. | Colloc. | Syntac.
1 | But this is not lava. (The Guardian, 2015) | COBUILD | No | No | No
2 | You want lava lamps now? (The Guardian, 2015) | COBUILD | No | No | No
3 | Twelve of the pictures were filler items representing the following categories: grapes, hamburger, horse, tree, dinosaur, truck, fork, plane, bike, toothbrush, keys, and fish. | CAMB | No | No | No
4 | During the ceremony, which took place on the lava fields, we both cried when we were reading our vows. (Times, Sunday Times, 2014) | COBUILD | Yes? | No | No
5 | A lava lamp glows on a wooden chest. (The Guardian, 2018) | COBUILD | No | (Yes)76 | No
6 | Money isn’t everything. | LDOCE | No | Yes | No
7 | Schlieren records were obtained of a number of trial air and helium-sphere explosions at several initial sphere pressures. | CAMB | Yes? | No | No
8 | The performance isn’t top notch and there’s no wireless charging, but this is still a great phone for the money. (Simon Hill, WIRED, 12 July 2023) | MW | Yes | Yes | No
9 | The company cannot meet demand, and has seen an explosion of customer complaints. | LDOCE | Yes | Yes | No
10 | In the rainy season in particular, potholes, floods, swamps and filth make it extremely difficult for cars and trucks to ply the roads. | CAMB | Yes | Yes | No
11 | Thanksgiving weekend we loaded up a truck and headed west. (Christianity Today, 2000) | COBUILD | Yes | Yes | No
12 | Reports suggested that on a bridge on this road a logging truck was blocking the traffic completely. (Stewart, Bob (Lt-Col), Broken Lives, 1993) | COBUILD | Yes | Yes | No
13 | A dozen people suffered minor injuries after a lorry jackknifed on an icy M62. | OALD | Yes? | Yes | No
14 | The motorway was closed by an overturned lorry. | OALD | Yes? | Yes | No
15 | The thawing of permafrost represents a positive feedback that amplifies warming by releasing more greenhouse gas into the atmosphere. (Scott K. Johnson, Ars Technica, 6 Feb. 2020) | MW | Yes | No | Yes
16 | All this can be interpreted as an attempt to diminish the cost of abating pollution from buses and trucks. | CAMB | Yes | No | Yes
17 | Thus, morphological awareness tended to progress in a clear developmental pattern in relation to spelling of morphemes indicating past tense. | CAMB | Yes | Yes | No
18 | The volcano began to expel lava on Sunday night, and it is estimated that 149 tonnes of sulfur dioxide were emitted on that day alone, researchers said. (Julia Jacobo, ABC News, 14 June 2023) | MW | Yes + | Yes | No
19 | Geologically, the only reasonable conclusion is that the lavas were erupted subaerially, or perhaps emplaced as sills close to the surface in some cases. | CAMB | Yes | Yes | Yes
20 | It is important to note that our model assumes that the lava was erupted in an isothermal state at its melting temperature. | CAMB | Yes | Yes | Yes
106Here are the general trends that can be observed based on this selection of examples:
- Some examples do not seem to bring any useful information at all. This is the case for examples (1), (2) and (3): the only function that can be assigned to them is mere attestation (cf. the attesting contexts defined by Dubuc and by Auger & Rousseau, presented in Section 1.1.4.). Typically, they are very vague examples (2, 3) or examples with problematic anaphoric references (1).
- Some examples bring very limited information, or only one kind of information. The only kind of semantic information that example (4) could bring, even if this may sound a bit far-fetched, is that lava fields are places where certain events can take place. Example (5) only shows the collocation between lamp and glow (which provides no specific information regarding lava).
- Some examples might be wrongly classified. Example (6) is in fact an idiom and should be classified as such, preferably with an explanation or definition.
- Some examples lack intelligibility. Since there is no access to the surrounding context, the complexity of the vocabulary can make an example of little use. This is the case for example (7), which is highly specialized but contains no defining elements, and also, to a lesser extent, for example (8), where the combination of informal vocabulary (top notch) and rather specialized vocabulary (wireless charging) makes the context slightly more difficult for non-native speakers, who might not immediately spot the fixed expression “for the money”.
- Some examples do provide several types of information at the same time. A number of contexts provide both collocational patterns and some kind of semantic information: this is the case for examples (9) to (12), where we can see, for instance, that truck collocates (as an object) with the verb load, or (as a subject) with the verb block, and that it is used in the semantic field of transportation and vehicles (headed west, bridge, road, traffic…). Some contexts provide both collocational patterns and pragmatic information regarding geographical variation: lorry is used in two contexts, (13) and (14), that relate to British life and British English (M62 and motorway). Others provide both semantic and syntactic information: example (15) shows the cause/consequence link between warming and greenhouse gases as well as the syntactic structure to be released into, and example (16) shows the conceptual link between pollution and trucks as well as the preposition in pollution from trucks. Example (17), which relates to the specialized field of linguistics, provides both semantic and collocational information.
- Finally, some examples can be considered much more informative than others. This is the case of examples (18) to (20), which are all specialized contexts. Example (18) is particularly rich conceptually speaking, as it provides information on what volcanic eruptions consist in (expulsion of lava and emission of sulfur dioxide by volcanoes) and specialized collocational patterns (a volcano expels lava; gases such as SO2 are emitted). The addition of “researchers said” is also a reliability indicator for the user. All three types of information can prove particularly useful for translators. Examples (19) and (20) also provide conceptual information related to the fields of geology and volcanology, but they also show a very interesting pattern that is only found in specialized discourse: the transitive use of the verb to erupt (Josselin-Leray [2005: 453-455]; Josselin-Leray [2010]).
107What is rather striking is that very few metalinguistic contexts and very few defining contexts (two types of contexts that we identified in Section 1.1.4.) are found among the additional examples available in the five dictionaries under study, unlike what is recorded, for instance, in term banks. Such examples might have been used directly by lexicographers when writing the definitions (Krishnamurthy [1987: 66]), so they may indirectly benefit users. But instead of trying to fit each example into one particular category of context, another fruitful way of looking at the various types of examples might be to analyze their ‘degree of richness’ by resorting to the notion of “Knowledge-Rich Context”, which has been extensively used in the fields of terminology and translation (Josselin-Leray et al. [2014], Picton et al. [2018]) – the examples analyzed above range from “poor” contexts (ex. (1) to (3)) to “very rich” contexts (ex. (18) to (20)).
108To briefly conclude on Section 3., it seems that more choice is not necessarily synonymous with better choice, and one may wonder to what extent such a large number of extra examples might be useful to users, and especially to translators. The lack of sense mapping, in particular, does not help with text comprehension or decoding, and the fact that collocational patterns are not always clearly highlighted within the examples does not provide immediate help for production or encoding purposes.
109This section first puts forward a number of suggestions for improvement of the “additional example” feature with regard to the variables analyzed in Section 3 (Section 4.1.), then it presents some avenues to be explored for further analysis (Section 4.2.), before concluding on the “translator’s dream tool” (Section 4.3.).
110One of the simplest suggestions for improvement regarding the presentation of additional examples would be, for dictionaries such as CAMB or COBUILD which are in fact portals, to group together in one single section all the examples that are scattered across several locations. The presentation in COBUILD is particularly confusing because the same ten examples found in the first section of the “example in a sentence” feature also appear in the “sentence” tab, either at the very beginning or sometimes mingled with all the other sentences.
111The labelling could also be improved in some cases: the label “examples from the corpus” for additional examples in LDOCE is misleading, as it gives users the impression that the examples provided within the dictionary entry are not taken from corpora, while the very same examples are sometimes provided both within the entry and in the extra section (e.g., “an explosion of laughter”).
112Our analysis has shown that the number of additional examples available for a given word varies greatly, from none at all (with no obvious explanation, except, maybe, for a neologism such as wokeness) to more than 80 (money in CAMB). Given the current opportunities provided by huge corpora and the Web, it seems to us that “none” should not be an option, and that some extra examples could and should be systematically provided.
113On the other hand, the presentation mode consisting in a very long list of examples that has to be scrolled through (as in CAMB and COBUILD) might lead to information overload, and we may wonder how much is too much. Gouws & Tarp [2017: 394] warn against the “nature and extent of the data provided in online dictionaries – where overloading of data can be an inviting trap” and insist that an important balance has to be struck between the storage space and the presentation space. As underlined by Ptasznik [2023: 45], further research by metalexicographers on the confusion possibly induced by this type of presentation is needed, especially in an experiment carried out under real-life conditions, as we suggested above.
114All in all, the ideal number of sentences is hard to establish and, again, it might depend a lot on the user profile, but also on a number of parameters such as the nature of the translation task itself, the conditions under which the task is performed, the stage of the translation process, etc. Ptasznik’s study, which tested whether eight examples (as opposed to only three) helped or confused learners of English, concluded that
the upper-intermediate and advanced language users did not seem to be troubled by the excess of information offered to them in the translation tasks, and the availability of either three or eight examples led to approximately identical translation accuracy. […] [T]he availability of eight examples makes it marginally easier for the students to get necessary information, as opposed to the availability of only three examples [2023: 45].
115In the light of these findings, the choice made by COBUILD to provide an initial set of ten sentences, and by MW to provide eight web-sourced examples, seems reasonable enough. But the best option regarding number probably lies in customization, with “see more/see less” options such as those found in OALD and MW, which could even be refined in the same way as corpus tools allow users to display a customized number of concordance lines. This corroborates Varantola’s idea that professional users are “capable of deciding when they have seen enough77” [2002: 37].
116But even more than a precise number, what really matters is whether the examples have been sorted or not, as will be discussed below.
117Our analysis has shown that the choice of the noun phrase or, in most cases, of the sentence as the unit of segmentation is far from always satisfactory, something which has also been evidenced for TMs and MT (see Section 1.4.), since translators aim to achieve equivalence “at text level” (Bowker [2006: 176]).
118Translators who participated in the Cristal project experiment (Picton et al. [2018: 126]) were divided over the question of the ideal length of the contexts displayed while they were translating – the general answer was “neither too short nor too long”. Some “stressed the importance of switching to texts in full length, as is possible with usual concordancers”. An ideal solution might then be a clickable searchword giving access to the full text from which the context is taken (provided there are no copyright issues). This is partly implemented in the French Robert Dico en ligne, for instance, or in the term bank IATE78, where the sources include hyperlinks. In the Robert, the sentence presented in the dictionary entry is highlighted in the text it is extracted from, which makes it easier to spot. This format might be preferable to the concordance format, which is more meaningful to linguists than to lay users, as stated by Asmussen [2014: 1085], according to whom the concordance format “does not make it clear where to find the required linguistic information”.
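As a rough illustration of this idea, the sketch below (our own, not the Robert’s actual implementation) shows how an example sentence could be highlighted within the full text it was extracted from; the function name and the sample text are invented for the purpose of the demonstration.

```python
# A minimal sketch of the "highlighted sentence in full text" idea;
# all names and data here are hypothetical, for illustration only.
import html

def highlight_example(full_text: str, example: str) -> str:
    """Return the source text as HTML with the example wrapped in <mark>
    tags, so that users can immediately spot the quoted sentence."""
    escaped_text = html.escape(full_text)
    escaped_example = html.escape(example)
    if escaped_example not in escaped_text:
        # Fall back to the plain text if the sentence cannot be located
        # (e.g., because the citation was lightly edited by the dictionary).
        return f"<p>{escaped_text}</p>"
    return "<p>" + escaped_text.replace(
        escaped_example, f"<mark>{escaped_example}</mark>", 1) + "</p>"

source_text = ("The eruption intensified overnight. The volcano began to "
               "expel lava on Sunday night, and nearby villages were evacuated.")
print(highlight_example(source_text,
                        "The volcano began to expel lava on Sunday night"))
```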
119A middle course could be to resort to the paragraph as the basic unit of segmentation, as suggested by Bowker [2006: 184], but this might be harder to implement in the end than access to the full text.
120It seems hardly debatable that extra examples should be taken from (carefully compiled) corpora, but Web-sourced examples prove to be an additional asset in the case of neology, provided they come from reliable sources. Their usefulness for translators would be greatly enhanced if metadata were systematically provided and detailed, as we saw in Section 3.4.2.2. This is what the Robert Dico en ligne has implemented, with examples taken from reliable newspapers and magazines (Ouest France, Géo, Capital, Ça m’intéresse) or academic databases (Cairn), whose references are displayed, as can be seen in Figure 1079.
- 80 The metadata provided by the Robert Dico en ligne dictionary is undoubtedly an asset; however, one (...)
Figure 10. Example sentences for lave, noun in the Robert Dico en ligne80
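On the data side, systematically recording such metadata is straightforward. The sketch below (a hypothetical structure of our own, not the back-end of any existing dictionary) stores an additional example together with the reliability indicators discussed above, using the reference of example (18) in Table 9 as sample values.

```python
# A hypothetical record structure for an additional example with full
# metadata; the field names are our own and merely illustrate what
# "systematically provided and detailed" metadata could look like.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdditionalExample:
    text: str                  # the example sentence itself
    headword: str              # the entry it illustrates
    sense_id: Optional[str]    # sense mapping, when available
    source: str                # publication or corpus name
    author: Optional[str]      # byline, for Web-sourced examples
    date: Optional[str]        # publication date
    url: Optional[str]         # hyperlink to the full text

example_18 = AdditionalExample(
    text="The volcano began to expel lava on Sunday night.",
    headword="lava",
    sense_id=None,             # no sense mapping in most dictionaries studied
    source="ABC News",
    author="Julia Jacobo",
    date="14 June 2023",
    url=None,                  # not provided by the dictionary
)
print(f"{example_18.text} ({example_18.author}, "
      f"{example_18.source}, {example_18.date})")
```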
121Back in the 1970s and 1990s, both Hanks and Varantola were very assertive about the role of the lexicographer:
Should electronic dictionaries then give an unlimited number of usage examples to suit every possible user need? It can certainly be argued that there is no point in giving the user an unsorted collection of examples. It is the lexicographer’s duty to crystallize the corpus evidence into well-chosen examples and to give the user a good breakdown of the lexical item’s behaviour, both its prototypical behaviour and its range, as well as its semantic mobility. (Varantola [1994: 608])
A vast citation file is no substitute for the judgment of the lexicographer. (Hanks [1979: xxxv])
- 81 The step-by-step analysis of concordances and the way they were used for the selection of examples (...)
- 82 Our emphasis.
122Those two quotations, written long before dictionaries went online, illustrate what is, in our eyes, the main issue in the current implementation of the additional example feature in most dictionaries: authenticity alone is not enough, as Hanks [2012: 68] reasserted in 2012. The very large number of additional examples might be more of a (marketing?) ploy than a truly useful feature. Users are, in a way, asked to take on the role of the lexicographer, i.e., to scan long lines of concordances to try and decipher meaning81 and to identify recurrent patterns, but without having access to the same tools (corpora and corpus-analysis tools such as concordancers), and, in some cases, without even having been taught about the differences between a dictionary and a corpus (and also, for trainee translators, between other translation technologies such as TMs and MT, as we saw earlier). In addition to the challenges related to the format and the user interface in (so-called) dictionary-cum-corpus tools that we mentioned in Section 4.1.3., Asmussen [2014: 1084-1087] warns against the “difficulties a non-expert corpus user may encounter with regard to […] the interpretation of query results”. When corpus examples are not explicitly linked to dictionary data (as is the case for at least three dictionaries in which there is no link between the extra examples and specific sense divisions), the risk that users may misunderstand corpus results (and encounter contradictions between the lexicographic data and the corpus material) is not insignificant. The analytical skills required for the accurate understanding of the examples might be too demanding for trainee translators, while such a time-consuming task might put off professional translators. In fact, what translators really need is “better access to well-selected82 raw data (e.g., access to a representative range of “real examples”) for deductive decision-making” (Varantola [2002: 37]).
123Systematic sense mapping, as done in OALD, is probably what is most relevant for users. However, it seems hard to implement on a large scale. In 2012, Hanks stated that “analysing meaning in context is a skill that is still in its infancy” [2012: 64] – so the full automation of sense disambiguation or sense mapping is still far from being a reality, as underlined by Bothma & Gouws [2022: 76].
124It might be more fruitful to investigate other ways of filtering information. For instance, the Cristal project aimed at providing translators, within a CAT tool, with automatically retrieved corpus contexts containing terminological relation markers, as these Knowledge-Rich Contexts were deemed very useful for translators (Josselin-Leray et al. [2014]). Let us also mention that in 2016 a Sketch Engine plugin for CAT tools was created, but it was discontinued a few years later, probably because translators did not feel comfortable with it.
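The general principle of marker-based retrieval can be sketched in a few lines; the marker inventory below is a toy sample of our own and does not reproduce the Cristal project’s actual patterns.

```python
# A minimal sketch of marker-based retrieval of Knowledge-Rich Contexts:
# keep only the sentences that contain a terminological relation marker.
# The marker list is illustrative, not the Cristal project's inventory.
import re

KRC_MARKERS = [
    r"\bis a (?:type|kind|form) of\b",    # hyperonymy
    r"\bconsists? of\b",                  # meronymy
    r"\bis defined as\b",                 # definition
    r"\b(?:known|referred to) as\b",      # naming
    r"\bsuch as\b",                       # exemplification
]

def is_knowledge_rich(sentence: str) -> bool:
    """Return True if the sentence contains at least one relation marker."""
    return any(re.search(m, sentence, re.IGNORECASE) for m in KRC_MARKERS)

corpus = [
    "But this is not lava.",                                  # poor context
    "Lava is defined as molten rock expelled by a volcano.",  # rich context
    "Magma consists of molten rock, volatiles and solids.",   # rich context
]
for sentence in corpus:
    print(is_knowledge_rich(sentence), "-", sentence)
```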
125However, two features found in the current version of Sketch Engine should be brought to the translator’s and the lexicographer’s attention. One is the Word Sketch tool, which provides an instantaneous picture of a word’s collocates (see Section 1.4.2.) without users having to scan all the concordance lines, and which could easily be made available to translators within a dictionary entry. Another tool, mostly aimed at lexicographers, is GDEX, which was first developed in the preparation of the Macmillan Learner’s Dictionary (Kilgarriff et al. [2008], Kosem et al. [2019]). GDEX stands for “Good Dictionary EXamples”. It is “a system for evaluation of sentences with respect to their suitability to serve as dictionary examples or good examples for teaching purposes”83. Concordances extracted from corpora are “evaluated with respect to their length, use of complicated vocabulary, presence of controversial topics (politics, religion…), sufficient context, references pointing outside of the sentence (e.g., pronouns), brand names and other criteria”. A major advantage is that, according to the manual, it can “rule out poor candidates” (like the ones we identified in Section 3.5.2.) and it “offers the lexicographer a pre-selected set of sentences with a much higher chance of containing good sentences for the purpose of dictionary examples”. Applying the GDEX tool to “additional” examples, and not just to “regular” examples, could make it possible to automatically discard a number of irrelevant ones, e.g., examples (such as example (1) in Table 9) which are too short and start with but immediately followed by an anaphoric pronoun such as it, or examples that contain proper nouns (“Schlieren” in example (7)) or words that are unknown to NLP tools. Customizing GDEX according to one’s own criteria, though, is not yet possible within Sketch Engine, which rules out a number of potentially interesting contexts for translators right from the start: what if they needed contextual information when translating a document on a controversial topic such as politics or religion?
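To make the logic of such filtering concrete, here is a deliberately simplified, GDEX-like scorer; the weights, word lists and thresholds are invented for illustration and do not correspond to the actual GDEX configuration.

```python
# A simplified GDEX-like scorer; the heuristics paraphrase the criteria
# quoted above, but the weights and word lists are invented.
ANAPHORIC_OPENERS = {"but", "it", "this", "that", "they", "he", "she"}
COMMON_STARTERS = {"the", "a", "an", "in", "on", "at", "we", "and"} | ANAPHORIC_OPENERS

def gdex_like_score(sentence: str) -> float:
    """Score a candidate sentence; higher means more suitable as an example."""
    words = sentence.split()
    score = 1.0
    if not 6 <= len(words) <= 25:              # too short or too long
        score -= 0.4
    if words[0].lower().strip(",.;") in ANAPHORIC_OPENERS:
        score -= 0.3                           # reference pointing outside the sentence
    # Capitalized words are treated as likely proper nouns unless they are
    # ordinary sentence starters; a real system would use POS tagging here.
    if any(w[0].isupper() and (i > 0 or w.lower().strip(",.;") not in COMMON_STARTERS)
           for i, w in enumerate(words) if w[0].isalpha()):
        score -= 0.2
    if any(len(w.strip(",.;")) >= 12 for w in words):   # complicated vocabulary
        score -= 0.1
    return max(score, 0.0)

candidates = [
    "But this is not lava.",
    "The volcano expelled lava and ash for three days.",
    "Schlieren records were obtained of a number of trial air and "
    "helium-sphere explosions at several initial sphere pressures.",
]
for c in sorted(candidates, key=gdex_like_score, reverse=True):
    print(f"{gdex_like_score(c):.2f}  {c}")
```

On this toy scale, the well-formed sentence scores 1.0, the Schlieren example is penalized for its proper noun and complex vocabulary, and example (1) from Table 9 ranks last.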
126Our study had a rather limited scope, with only ten words, and focused on only one part of speech, nouns. Further studies could cover more words and other parts of speech, such as verbs and adjectives, whose lexicographical treatment is often more problematic for translators. Studying series of words belonging to the same type (e.g., words belonging to a given semantic field, words with a formal or an informal register) might also reveal some interesting tendencies. A comparison of the treatment of the additional example feature by monolingual dictionaries in other languages (e.g., the French Robert Dico en ligne) could also provide some valuable insight.
127As highlighted by Zetzsche, translators are “very, very diverse and versatile” [2023: 126]. The analysis we carried out here could be refined depending on the translator’s profile. At least three distinctions seem particularly relevant. First, a more clear-cut distinction between seasoned translators and trainee translators would be beneficial when analyzing the usefulness and usability of the additional example feature. Indeed, seasoned translators, who are more familiar with the terminology and concepts in their field of specialization and who might use dictionaries less than other tools such as TMs, are more likely to be interested in saving time when they look up words in dictionaries, whereas trainee translators might make more extensive use of ‘traditional’ online dictionaries and be more interested in the information they contain than in their time-saving potential. Second, a distinction between in-house translators and freelance translators could also prove pertinent. While the former might be more familiar with a number of translation technologies and tools, as suggested by the findings of Picton et al. [2015], the latter might need more user-friendly material. The relevance of introducing a proper concordance feature in dictionaries might be questioned and seriously investigated based on the user profile, as Rundell [2015: 319] had already pointed out. Finally, another criterion that needs to be taken into account is the directionality of the translation: the usefulness of the additional example feature might vary greatly depending on whether translators are translating into their first or second language. Varantola [2000] showed how specialized corpora – from which most extra examples are taken – have high “reassurance value”, particularly where the target text is in the translator’s L2, insofar as they illustrate contexts similar to those the translator is working on. The notion of context similarity is also critical for translators, as will be explained below (Section 4.3.).
128Most importantly, our discussion of the usefulness of the various characteristics of the additional examples for translators is based on a number of assumptions regarding their needs, as described in translation studies or as evidenced by surveys (see Section 1.2.). It would be worth testing our findings against an empirical study. Translators with various profiles (e.g., trainee vs. professional, in-house vs. freelance) could be asked to perform a real translation task in an ecologically valid, or nearly ecologically valid, environment, i.e., with all the tools they normally have at their disposal (including other dictionaries, term banks, CAT tools, MT…) – unlike Ptasznik [2023], whose experimental settings were rather artificial, as he himself acknowledges [2023: 147]. In that context, they could assess the usefulness and usability of the extra example feature, in combination with the contextual information retrieved from other tools and the features found in those tools (e.g., Word Sketch), in the same way as the usefulness of Knowledge-Rich Contexts automatically retrieved from a specialized corpus was assessed within the framework of the Cristal project (Picton et al. [2018]), using logs, screen recordings, online surveys and retrospective verbalization tasks.
129Several lexicographers and translators have written extensively about the translator’s dream tool (Atkins [1996], De Schryver [2003], Orozco-Jutorán [2017]), which very often includes access to a corpus and corpus-analysis tools such as concordancers. Since translators still seem rather reluctant to use these tools (see Section 1.4.2.), it might be a better idea to start from what they are mostly familiar with (CAT tools) or are becoming more and more familiar with (MT). As mentioned earlier, the contexts that translators consider most useful are those that are similar to the texts they are translating. The technology that allows context matching and fuzzy matching in CAT tools is based on this very notion of similarity, so it might be interesting to see how it could be used to match the text to be translated with “resembling” contexts, in both the Source Language and the Target Language, in large corpora made available within the CAT tools. Bothma & Gouws [2022: 58-70] thus suggest “mapping […] a word in a text and an item in an e-dictionary article”, emphasizing that the presence of metadata could be especially helpful [2022: 76].
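As a very simplified illustration of this idea, the sketch below ranks corpus contexts by surface similarity to the segment being translated, with Python’s difflib standing in for a TM engine’s fuzzy-matching algorithm; the corpus, segment and threshold are invented.

```python
# A minimal sketch of similarity-based context retrieval; difflib's
# SequenceMatcher stands in for the fuzzy-matching algorithm of a TM
# engine, and the data are invented for illustration.
from difflib import SequenceMatcher

def rank_contexts(segment: str, contexts: list[str], threshold: float = 0.4):
    """Rank corpus contexts by surface similarity to the segment being
    translated, keeping only those above the threshold."""
    scored = [(SequenceMatcher(None, segment.lower(), c.lower()).ratio(), c)
              for c in contexts]
    return sorted(((round(s, 2), c) for s, c in scored if s >= threshold),
                  reverse=True)

corpus = [
    "The volcano began to expel lava on Sunday night.",
    "A lava lamp glows on a wooden chest.",
    "The lava was erupted in an isothermal state at its melting temperature.",
]
segment = "The volcano expelled lava for several nights."
for score, context in rank_contexts(segment, corpus):
    print(score, context)
```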
- 84 For more information on the function of word embeddings for machine translation, see Poibeau [2017: (...)
130Word embeddings, on which Neural Machine Translation is based, also have to do with similarity, as they are used to calculate or predict semantic similarity between words, terms or sentences84, so their role in providing useful contexts for translators could be further investigated.
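The underlying mechanism can be illustrated with a toy example: below, hand-made four-dimensional vectors stand in for real embeddings (which have hundreds of dimensions and are learned from corpora), and contexts are ranked by cosine similarity to a query sentence.

```python
# A toy illustration of embedding-based similarity: the vectors are
# hypothetical stand-ins for real learned sentence embeddings.
import numpy as np

embeddings = {
    "The volcano began to expel lava.":     np.array([0.9, 0.1, 0.3, 0.0]),
    "A lava lamp glows on a wooden chest.": np.array([0.1, 0.8, 0.0, 0.4]),
    "Molten rock erupted from the crater.": np.array([0.8, 0.2, 0.4, 0.1]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

query = "The volcano began to expel lava."
for sentence, vector in embeddings.items():
    if sentence != query:
        print(f"{cosine(embeddings[query], vector):.2f}  {sentence}")
```

On such a representation, the paraphrase about molten rock scores much higher than the lava lamp sentence, despite sharing fewer surface words with the query.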
131The aim of this paper was to investigate the usefulness and usability for translators of the additional example feature found in online monolingual English learners’ dictionaries. This feature was first introduced by lexicographers to address the “typical complaint about dictionaries […] that [dictionaries] do not give enough usage examples” (Varantola [1994: 607]), a complaint voiced specifically by translators. The possibilities opened up by the availability of huge amounts of data from carefully designed corpora or even the Web need to be closely monitored if lexicographers do not want this feature to be reduced to mere smoke and mirrors. What a rather detailed study of the 1,306 additional examples provided in five dictionaries has shown is that more is not always better, and that the translator’s needs regarding contextual information are still only partially met in the age of e-lexicography. We agree with Tarp & Gouws [2019: 264] that “full contextualization is still a challenge to modern lexicography”.
132These findings should be corroborated or invalidated by evidence from additional empirical studies where translators would deal with this feature in the context of a real translation task, at various stages of the translation process, and in conjunction with all the other tools at their disposal. Some findings of the Cristal project highlighted in Picton et al. [2018: 126] have indeed revealed that corpus-extracted contexts are “only really useful when they complement or are completed by other resources that should be integrated in the translator’s environment”.
133We believe, as Atkins already stressed in 1996, that users’ needs are paramount in dictionary-making. Producing a multi-purpose customizable tool that better meets translators’ needs might start with listening to what they have to say about the tools they use daily, especially translation technologies, and observing the way they interact with them. Translation workplace-based research (see Ehrensberger-Dow & Massey [2020]) could probably cast even more light on the subject.