
Managing Noise in Part-of-Speech Tagging for Extremely Low-Resource Languages: Comparing Strategies for Corpus Collection and Annotation in Dagur and Alsatian

Delphine Bernhard and Joanna Dolińska

Abstract

Although Dagur and Alsatian represent two typologically distant language families, they share several similarities: both languages are endangered, lack a unified orthographic system, and have few digital corpora available. Given these challenges, the main objective of this article is to compare the noise in corpora for these two languages and its impact on part-of-speech (POS) annotation and tagging. We first discuss strategies that can be used to reduce the noise due to orthographic inconsistencies observed during corpus collection, using Dagur as an example. We then observe that the distributions of POS trigrams in the manually annotated Dagur and Alsatian corpora are similar to those of typologically related languages in UD v2.12, which justifies experimenting with zero-shot transfer approaches for POS tagging. We evaluate a few simple noise-reduction strategies for POS tagging using the example of the Alsatian dialects, relying on their proximity to Standard German. The results confirm the important role of linguistic proximity in POS tagging and the effectiveness of the data transformation method we propose. However, they also call for a deeper interpretation of the capacities of multilingual models.


Authors’ note

The authors would like to acknowledge the High Performance Computing Center of the University of Strasbourg for supporting this work by providing scientific support and access to computing resources. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d'Avenir) and the CPER Alsacalcul/Big Data. In addition, the authors acknowledge the support of the National Science Centre in Poland for awarding the Miniatura-6 grant 2022/06/X/HS2/01374 to Joanna Dolińska, as well as the support of the French National Research Agency (ANR-21-CE27-0004 DIVITAL project) which enabled the authors to start and conceptualize the research presented in this article.


1. Introduction

Extremely low-resource languages are often predominantly oral languages, without orthographic standards or with various competing standards. The lack of spelling standards does not mean that these languages are not or cannot be written, but that the collection, documentation and annotation of corpora must take into account numerous orthographic variants, which can be regarded as a kind of “noise” (Al Sharou et al. 2021). Extremely low-resource languages are also affected by common categories of noise in Natural Language Processing (NLP): code-switching, spontaneous speech on social media, overrepresentation of one particular variety of a given language in text, etc. All of these phenomena add up and are often difficult to disentangle, e.g. spelling variation that stems from a writer’s own preferences or is used to represent a specific variant (Millour 2020). Moreover, in the context of cross-lingual transfer learning, where models trained on one language are directly applied to another language, research has shown that the linguistic proximity of languages is a key factor in achieving better results. Thus, if we consider that “noise” refers to any kind of “unexpected” or “non-standard” content (Al Sharou et al. 2021), then applying cross-lingual transfer techniques to new unrelated languages is tantamount to applying the models to noisy data.


In this article, we focus on two extremely low-resource languages, Dagur and Alsatian. While these two languages are typologically unrelated, both are affected by noise in similar ways, providing differing points of view on related issues. The Alsatian Alemannic dialects[1] are spoken primarily in Northeastern France and are related to other Alemannic dialects in Europe. The number of speakers is in constant decline, and inhabitants of the Alsace region tend to favour the French language. Dagur is the easternmost Mongolic language, spoken by around 130,000 speakers mainly in the northeastern part of China (Yamada 2020). There is no widely used written standard for Dagur, even though it has been written down in the Manchu script (Tsumagari 2005), as well as in Cyrillic and Latin (Yamada 2020).

In this article, we compare the noise in corpora for Alsatian and Dagur and its impact on part-of-speech (POS) annotation and tagging by addressing the following research questions:

  • RQ1: What strategies can be used to deal with noise during corpus collection in an extremely low-resource setting? Two divergent choices can be made here: either to “accept” noise or to reduce it. Our work addresses both strategies: for Alsatian, the texts are kept “as is”, while the Dagur documents are all written down in Cyrillic, with spelling uniformisation.

  • RQ2: What strategies can be used to handle noise for automatic POS tagging? We highlight the importance of using training data in closely related languages. We also show how simple pre-processing procedures can be applied to improve results on un-normalised corpora, by bringing the low-resource Alsatian dialects closer to the better-resourced German language.

2. Corpora and Manual Annotation

2.1. Corpora

The datasets used in this work consist of two corpora for Alsatian and Dagur.

The Dagur corpus contains 4,502 tokens and 550 sentences. It includes excerpts from Todaeva (1986) written in Cyrillic script and texts from Martin (1961) in Latin script.

The Alsatian corpus contains 12,582 surface tokens and 12,907 syntactic words.[2] It consists of texts from two main sources: Wikipedia articles from the Alemannic Wikipedia (which were explicitly categorized as Alsatian) and chronicles written for a local General Council magazine. In addition, it contains an excerpt from a theater play and some recipes.

2.2. Script and Spelling

When collecting corpora for Alsatian and Dagur, the presence of noise primarily results from the lack of common spelling standards and from the individual choices made by the authors or by the persons who compiled and transcribed the texts. For Alsatian, only the Latin script has been used, though Latin Fraktur can be found in older documents. Alsatian has no commonly accepted spelling standard, despite several proposals for a unified Alsatian spelling, such as ORTHAL (Zeidler & Crévenat-Werner 2008). Moreover, the Alsatian dialects are used mainly orally and written production is rather limited. Diverging choices were made for the two languages. For Alsatian, the corpus was collected from different sources and authors (Bernhard et al. 2018), and spelling diversity was retained without any attempt at normalisation, except for the correction of clear spelling errors (Bernhard et al. 2021): this choice was made so as to obtain a dataset representative of the diversity found in Alsatian texts. In other words, the decision was made to “embrace” noise. As a consequence, noise arising from spelling inconsistencies is handled at later stages of corpus processing, in particular for automatic POS annotation (see Section 3.2). For Dagur, the chosen strategy was to transcribe the Latin-script source into Cyrillic, closely following the transliteration pattern introduced by Todaeva (1986). Besides, the only other Mongolic language represented in the Universal Dependencies dataset is Buryat, which also uses Cyrillic. It has been noticed that various Dagur resources written in the Cyrillic script (Todaeva 1986, Tsybenov & Tumurdei 2014) use different orthographies. In one of the analysed stories, some words are even transliterated in two different ways by the same author, such as “yausan” and “yavsan” in the story “Khaniikaa” (Todaeva 1986). The above-mentioned instances of “noise” were resolved during data collection, so as to unify the spelling used in the corpus. The selected spelling option reflected the prevalent variant in the analysed resources, which was “yausan” in this case. Furthermore, the annotator of Dagur decided to consistently apply the spelling “-au-” instead of “-av-” because it better reflects the Dagur pronunciation.

2.3. Manual Annotation

The Alsatian and Dagur corpora were manually annotated with POS, following the Universal Dependencies guidelines for POS tags.

The Dagur corpus was annotated by one non-native speaker of Dagur, who followed the guidelines detailed for the Buryat language on the UD homepage (Badmaeva & Tyers 2017, 2023). The manual annotation of corpora for extremely low-resource languages faces the challenge of finding annotators who are both proficient in the language and versed in linguistic annotation tasks. As shown by Millour (2020), crowdsourcing can be a valid option, but it requires close contact with speaker communities and, so far, attempts to find native speakers of Dagur as consultants or co-annotators have not been successful. However, it is envisaged that Dagur native speakers could share their views on the corpus, use it for documentation purposes, and potentially expand it.

For Alsatian, the corpus was annotated by a main annotator and revised by a second annotator with the help of two experts (Bernhard et al. 2018). A small part of the corpus was annotated by several annotators to perform an inter-annotator agreement study: the kappa coefficient κ ranged from 0.82 to 0.93 depending on the annotator pair.
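As an illustration, pairwise agreement of this kind can be computed with standard tools; the sketch below uses scikit-learn's implementation of Cohen's kappa on invented toy tag sequences (none of these labels come from the actual Alsatian annotations).

```python
from sklearn.metrics import cohen_kappa_score

# Toy POS annotations from two hypothetical annotators (invented data;
# the real study compares annotator pairs on the Alsatian corpus).
annotator_1 = ["DET", "NOUN", "VERB", "ADV", "PUNCT", "ADP", "NOUN"]
annotator_2 = ["DET", "NOUN", "VERB", "ADJ", "PUNCT", "ADP", "NOUN"]

# Cohen's kappa corrects the raw agreement rate for chance agreement.
print(cohen_kappa_score(annotator_1, annotator_2))
```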

2.4. Data Statement

We refer the reader to Bernhard et al. (2018) and Bernhard et al. (2021) for the full documentation of the Alsatian corpus, and provide here a data statement for the Dagur corpus, which is an extension of the corpus presented in Dolińska and Bernhard (2024). The statement complies with the template proposed by Bender and Friedman (2018), the goal of which is to support the creation of ethically responsive NLP tools. Furthermore, data statements are designed to avoid exclusion, overgeneralization, and underexposure of given language communities.

Curator rationale: The two sources, Martin (1961) and Todaeva (1986), represent different Dagur varieties, were recorded at different points in time, and include sections on phonology and morphology as well as lexicons (Dagur-Russian in Todaeva 1986; Dagur-English and English-Dagur in Martin 1961).

Language variety: The texts from Todaeva (1986) include oral Dagur literature from a group of speakers, while the materials from Martin (1961) are based on the idiolect of a single Dagur speaker.

Speaker demographic: The speakers in Todaeva (1986) represent the Dagur Butha variety; they are most likely bilingual and also fluent in Chinese. They are probably elderly persons pursuing an agricultural mode of subsistence. It is not clear how many speakers shared their stories with B. Kh. Todaeva. Martin (1961) documents the idiolect of a middle-aged Dagur speaker, Peter Uregungee Onon, who was born in Bokore-cien in Inner Mongolia. P. U. Onon was a fellow at Johns Hopkins University when Austin (1952) recorded and analysed his speech. Soon after that, S. E. Martin carried out further research on these records with the participation of P. U. Onon (Miller 1962).

Annotator demographic: The annotator is a female expert in Mongolic languages with experience in computational linguistics.

Speech situation: Time and place: The Butha Dagur oral literature was most likely collected by B. Kh. Todaeva in the 1980s in Inner Mongolia (China). The idiolect of P. U. Onon was recorded by the early 1950s in the USA (Austin 1952). Modality (spoken/signed/written): The texts represented in the Dagur corpus were recorded orally. Scripted/edited vs. spontaneous: The materials collected by B. Kh. Todaeva were scripted and edited. The idiolect of P. U. Onon was recorded by W. M. Austin and later edited by S. E. Martin. Synchronous vs. asynchronous interaction: S. E. Martin’s work was based on synchronous interaction with a Dagur speaker. The type of interaction in the case of B. Kh. Todaeva is not known from the sources.

Intended audience: The intended audience of Todaeva (1986) is non-Dagur speakers interested in Dagur folklore and language. The addressees of Martin (1961) are primarily linguists.

Text characteristics: Todaeva (1986) includes folktales, riddles and proverbs written in lively language; they mention numerous names of plants and animals as well as descriptions of landscapes. Martin (1961) consists of concise sentences with basic Dagur vocabulary and could serve as a textbook for learning Dagur.

2.5. Annotation Consistency with Related Languages

Following earlier work on transfer learning (Rosa & Žabokrtský 2015), we measure whether closely related languages have similar distributions of sequences of POS tags, which could reflect both their morphosyntactic proximity and the consistency of POS annotations across related languages.

To this end, we use the KLcpos3 measure to assess whether the distributions of POS trigrams in our corpora are similar to those of typologically related languages in UD v2.12. KLcpos3 is based on the Kullback-Leibler divergence of the distributions of POS trigrams (Rosa & Žabokrtský 2015). It was proposed in the context of source treebank selection for delexicalized parsing and has been used to measure the annotation consistency of different treebanks for the same language (Aggarwal & Zeman 2020). Here we use it to assess the divergence in POS trigram distributions: values closer to zero indicate more similar distributions. The languages with the lowest KLcpos3 are shown in Table 1. For Alsatian, five out of six languages in the table are Germanic languages, with German and Swiss German being closest. For Dagur, Buryat obtains the lowest KLcpos3 value. The list also includes Uyghur and Turkish, as well as Dravidian (Telugu, Tamil) and Indo-Aryan (Marathi) languages spoken in India.
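To make the measure concrete, here is a minimal Python sketch of KLcpos3: for each POS trigram c attested in the target corpus, it accumulates f_tgt(c) * log(f_tgt(c) / f_src(c)). The epsilon smoothing of trigrams unseen in the source is our simplification, not necessarily the exact variant used by Rosa & Žabokrtský (2015).

```python
from collections import Counter
from math import log

def pos_trigram_dist(sentences):
    """Relative frequencies of POS-tag trigrams; `sentences` is a list of
    lists of UPOS tags, one inner list per sentence."""
    counts = Counter()
    for tags in sentences:
        for i in range(len(tags) - 2):
            counts[tuple(tags[i:i + 3])] += 1
    total = sum(counts.values())
    return {tri: n / total for tri, n in counts.items()}

def klcpos3(tgt, src, eps=1e-6):
    """KL divergence of the target trigram distribution from the source one;
    values closer to zero mean more similar corpora. Trigrams unseen in the
    source receive a small epsilon frequency (our smoothing choice)."""
    return sum(f * log(f / src.get(tri, eps)) for tri, f in tgt.items())

# Toy usage with invented UPOS-tagged sentences:
target = [["DET", "NOUN", "VERB", "ADV", "PUNCT"]]
source = [["DET", "NOUN", "VERB", "PUNCT"], ["DET", "ADJ", "NOUN", "VERB", "PUNCT"]]
print(klcpos3(pos_trigram_dist(target), pos_trigram_dist(source)))
```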

The lowest KLcpos3 value for Buryat confirms what we know about the Dagur language: it belongs to the Mongolic language family, which also includes Buryat. The presence of Uyghur and Turkish among the languages with the lowest KLcpos3 values can contribute to the scientific discussion on the genetic affinity between the Mongolic and Turkic languages (Robbeets et al. 2021). The similar POS trigram distributions observed in the manually annotated Dagur corpus and in Turkic treebanks suggest that these languages are syntactically very close to each other.

Table 1. Lowest KLcpos3 values for Alsatian and Dagur

Alsatian              Dagur
German        0.56    Buryat     0.81
Swiss German  0.71    Marathi    0.99
Dutch         0.72    Uyghur     1.08
French        0.92    Telugu     1.15
English       0.93    Tamil      1.24
Afrikaans     0.97    Turkish    1.47

3. Automatic POS Tagging

The Alsatian and Dagur corpora are used in automatic POS tagging experiments, in order to evaluate different zero-shot settings. Large multilingual language models lend themselves to transfer approaches which do not require annotated resources for the target language (zero-shot). These approaches are expected to be particularly useful for languages with few resources. In this section, we compare two different zero-shot methodologies for automatic POS tagging, in order to understand and mitigate the effects of noise and resource scarcity for Dagur and Alsatian.

The first method is applied to Dagur and consists in using models trained on a combination of two languages: a non-Mongolic language and Buryat. We repeat the experiments performed by Dolińska and Bernhard (2024) on our larger Dagur corpus, seeking to confirm some observations made on transfer performance for POS tagging in extremely low-resource languages. Lauscher et al. (2020) showed that transfer performance is primarily affected by the similarity in syntactic properties between source and target language. This analysis was confirmed by de Vries et al. (2022), who investigated zero-shot cross-lingual transfer learning for POS tagging with XLM-RoBERTa (Conneau et al. 2020) as the multilingual pre-trained model, using 65 source languages for training and 105 target languages for testing. They show that the inclusion of the target language (and, to a lesser degree, of the source language) in the pre-training dataset of the multilingual model is of particular importance. Belonging to the same language family also has an effect on accuracy, as does sharing the same writing system. Blum (2022) presents zero-shot experiments for languages from the low-resource Tupían family; their results also show that the proximity of languages is a strong predictor of performance and that combining several related languages can be useful.

The second method is applied to Alsatian and aims to further improve the performance of zero-shot transfer by using data transformation to reduce the noise caused by spelling inconsistencies (Bernhard 2023). Several automatic approaches have been proposed; they usually rely on data transformation techniques that can be applied at different levels: the pre-training corpus for the language model, the data used for fine-tuning on the target task, or the target data itself (Aepli & Sennrich 2022, Bernhard & Ligozat 2013, Blaschke et al. 2023, Hana et al. 2011, Lothritz et al. 2022, Wang et al. 2022). They involve transforming data from a language known to the model to bring it as close as possible to the target language or, conversely, transforming data from the target language to bring it closer to a language known to the model. In this article, we evaluate the latter type of approach, using Alsatian-German bilingual lexicons in particular. We also employ a simple diacritic elimination procedure in order to reduce spelling variation in Alsatian.

In the following sections, we detail our experiments with the Dagur and Alsatian corpora.

3.1. Dagur

For Dagur, we directly applied the models trained by de Vries et al. (2022), which were all trained on non-Mongolic languages, to the Dagur corpus, and continued training these models on Buryat. We analysed the results in three different zero-shot settings (see Figure 1): (1) unrelated zero-shot: a model trained on an unrelated language; (2) related zero-shot: a model trained on the related Buryat language; and (3) unrelated+related zero-shot: a combination of training on an unrelated language and Buryat.

Figure 1. POS tagging methods for Dagur

We selected the 10 best source languages for transfer based on an analysis of the results obtained on a subpart of the Dagur corpus (6 documents): Ancient Greek, Basque, Faroese, Icelandic, Latin, Polish, Romanian, Telugu, Turkish and Uyghur. Table 2 shows the results obtained by directly applying the POS tagging models trained by de Vries et al. (2022) to the Dagur corpus (column “base.”) and the new results obtained by continuing the training of these models on the Buryat UD corpus v2.12 (Badmaeva & Tyers 2023) (column “Buryat”). The last line in the table also shows the results obtained by fine-tuning the XLM-RoBERTa base model (Conneau et al. 2020) on the Buryat UD corpus.[3]
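For illustration, continued fine-tuning on Buryat can be sketched with the Hugging Face transformers library. This is a minimal sketch under several assumptions: the checkpoint name follows the naming scheme of the models released by de Vries et al. (2022) (not verified here), the UD Buryat treebank is loaded through the universal_dependencies dataset with config bxr_bdt, and the integer UPOS labels are assumed to match the source model's label mapping (in practice a remapping step may be needed).

```python
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

# Illustrative checkpoint name (assumed): an XLM-R model fine-tuned for
# UPOS tagging on an unrelated source language (here Icelandic).
SRC = "wietsedv/xlm-roberta-base-ft-udpos28-is"

tokenizer = AutoTokenizer.from_pretrained(SRC)
model = AutoModelForTokenClassification.from_pretrained(SRC)

# UD Buryat-BDT; as explained in footnote 3, the larger "test" split is
# used for training and the tiny "train" split for validation.
buryat = load_dataset("universal_dependencies", "bxr_bdt", trust_remote_code=True)

def encode(batch):
    # Align word-level UPOS labels with XLM-R subwords: label only the
    # first subword of each word, mask the others with -100.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, upos in enumerate(batch["upos"]):
        prev, labels = None, []
        for wid in enc.word_ids(batch_index=i):
            labels.append(-100 if wid is None or wid == prev else upos[wid])
            prev = wid
        all_labels.append(labels)
    enc["labels"] = all_labels
    return enc

train = buryat["test"].map(encode, batched=True, remove_columns=buryat["test"].column_names)
dev = buryat["train"].map(encode, batched=True, remove_columns=buryat["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("xlmr-pos-buryat", num_train_epochs=10),
    train_dataset=train,
    eval_dataset=dev,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```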

Table 2. Accuracy before (base.) and after (Buryat) fine-tuning on the Buryat UD corpus. ∆ corresponds to the increase over the base. column. Results for the fine-tuned models are averaged over 5 training runs and the standard deviation is reported (std.). Languages present in the XLM-R pre-training data are marked with *

Source language   base.   Buryat   ∆      std.
Ancient Greek     51.84   61.48    9.64   0.38
Basque*           57.91   61.86    3.95   0.23
Faroese           52.00   61.05    9.05   0.70
Icelandic*        53.58   62.25    8.68   0.47
Latin*            53.38   61.95    8.57   0.66
Polish*           53.02   61.44    8.42   0.55
Romanian*         54.46   62.02    7.55   0.78
Telugu*           55.31   60.16    4.86   0.34
Turkish*          56.13   61.75    5.62   0.42
Uyghur*           56.57   61.97    5.40   0.43
Buryat only       –       61.78    –      0.34

The results show that training on Buryat yields improvements over training on unrelated languages: there is an increase of 3.87 accuracy points over the results obtained by training on Basque, which is the best source language. Moreover, combining an unrelated language with Buryat leads to improvements for all source languages, of up to 9.64 accuracy points, with the languages absent from the XLM-R pre-training corpus (Ancient Greek and Faroese) showing the highest gains. The best results overall are obtained by combining Icelandic with Buryat. This shows that a combination of languages, even if one of them is distant from the target language, may be slightly beneficial in a zero-shot cross-lingual setting. However, this might simply be due to an increase in the amount of training data, and the gain over training on Buryat only remains small (from 61.78 to 62.25).

Figure 2. Comparison of F1 scores for individual POS with the models trained on Basque and Buryat

We also compare the F1-scores by POS tag for the best unrelated language (Basque) and Buryat in Figure 2. Unsurprisingly, the highest score is obtained for the PUNCT (punctuation) category, which is not specific to a given language and can be learned rather efficiently from other, even distant, languages. The figure also shows that training on Buryat is especially useful for the PART (particle) and PRON (pronoun) categories and, to a lesser extent, for ADJ (adjective), ADP (adposition), ADV (adverb) and AUX (auxiliary). Basque and Buryat are almost on par for the CCONJ (coordinating conjunction), DET (determiner), NOUN, NUM (numeral), PUNCT and VERB categories. These results can be explained by several features shared by Basque and Buryat. Basque is a “morphologically agglutinative language […] with no grammatical gender and basic SOV word order” (Ugarte 2020). Furthermore, it has a “pre-verbal word order, preposed modifiers, a rich case system, a highly regular agglutinating morphology with few alternations, an absence of prefixes” (Trask 1998). These features apply to Buryat too, which means that, morpho-syntactically, the two languages show a high level of resemblance. Of course, this does not imply that they originated from the same source language, but the comparison of F1-scores by POS tag suggests that involving both Buryat and Basque in further training on larger corpora could be beneficial for both languages.
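For reference, per-tag scores of this kind can be computed directly from aligned gold and predicted tag sequences; the sketch below uses scikit-learn on invented toy sequences rather than the actual Dagur annotations and model outputs.

```python
from sklearn.metrics import classification_report

# Toy gold and predicted UPOS sequences (invented for illustration);
# in practice these are the flattened Dagur test tags and tagger outputs.
gold = ["PRON", "NOUN", "VERB", "PUNCT", "PART", "ADV", "VERB", "PUNCT"]
pred = ["NOUN", "NOUN", "VERB", "PUNCT", "PART", "ADV", "AUX", "PUNCT"]

# Prints precision, recall and F1 for each POS tag separately.
print(classification_report(gold, pred, zero_division=0))
```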

3.2. Alsatian

For Alsatian, we seek to take advantage of the similarities between languages, in particular the similarities between Alsatian and German.

We transform our Alsatian corpus to approximate German using three simple procedures:

  • Accentuation (A): removal of vowel diacritics specific to the Alsatian dialects and absent in German;

  • Closed classes (C): use of a conversion lexicon from Alsatian into German consisting of 133 forms belonging to the closed classes (Bernhard & Ligozat 2013);

  • Open classes (O): use of a lexicon of 6,699 Alsatian-German word pairs (Bernhard 2014, 2021).
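A minimal sketch of these three procedures follows; the diacritic mapping and the toy lexicon entries are illustrative stand-ins for the actual resources cited above, and the order in which the procedures are applied is our own choice.

```python
# Minimal sketch of the A/C/O procedures. The diacritic mapping and the
# toy lexicons below are illustrative, not the actual resources.
ALSATIAN_DIACRITICS = str.maketrans("ìàùèò", "iaueo")  # ä/ö/ü exist in German: kept

CLOSED_CLASS = {"dr": "der", "ìsch": "ist", "nit": "nicht"}  # toy closed-class entries
OPEN_CLASS = {"Müeter": "Mutter"}                            # toy open-class entry

def transform(tokens, accents=True, closed=True, open_=True):
    """Apply lexicon lookups first, then diacritic removal (A) as a fallback."""
    out = []
    for tok in tokens:
        if closed and tok in CLOSED_CLASS:
            tok = CLOSED_CLASS[tok]
        elif open_ and tok in OPEN_CLASS:
            tok = OPEN_CLASS[tok]
        elif accents:
            tok = tok.translate(ALSATIAN_DIACRITICS)  # lower-case vowels only, for brevity
        out.append(tok)
    return out

print(transform("Mìt dr Jugend ìsch nit loos !".split()))
# -> ['Mit', 'der', 'Jugend', 'ist', 'nicht', 'loos', '!']
```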

These procedures can also be combined to maximise the number of transformations on the input corpus. Table 3 summarizes the number and percentage of words transformed by each procedure, while Table 4 gives examples.

Table 3. Number and percentage of syntactic words transformed using each procedure. The last column shows the average number of subwords per word after tokenization with XLM-R-base

Procedure   # transformations   % words   # subwords / word
None        0                   0%        1.92
A           2,586               20%       1.70
C           2,475               19%       1.82
O           749                 6%        1.88
AC          4,344               34%       1.66
AO          3,108               24%       1.68
CO          3,127               24%       1.78
ACO         4,804               37%       1.64

Table 4. Examples of sub-words depending on pre-processing

Original text   Mìt     dr    Jugend   ìsch    nit     loos    !
Subwords        M␣ì␣t   dr    Jugend   ì␣sch   nit     loo␣s   !
A               Mit     dr    Jugend   isch    nit     loo␣s   !
C               Mit     der   Jugend   ist     nicht   loo␣s   !
O               M␣ì␣t   dr    Jugend   ì␣sch   nada    loo␣s   !
AC              Mit     der   Jugend   ist     nicht   loo␣s   !
AO              Mit     dr    Jugend   isch    nada    loo␣s   !
CO              Mit     der   Jugend   ist     nada    loo␣s   !
ACO             Mit     der   Jugend   ist     nada    loo␣s   !


Table 5 details the results, in terms of accuracy, for the 10 source languages that achieve the best results on average for Alsatian, and for the different transformations used to approximate German.[4] For comparison, we also show the results obtained for Swiss German dialects (Aepli & Clematide 2018) with the same source languages (without data transformation): these dialects are very close to the Alsatian dialects, especially to those of the Upper Alemannic area in the south of Alsace, bordering Switzerland.

Table 5. Accuracy (in %) for different preprocessings. The 10 source languages shown are those with the best results on average for Alsatian. Languages present in the XLM-R pre-training data are marked with *

Source language     Swiss   Alsatian   A      C      O      AC     AO     CO     ACO
Afrikaans*          55.2    53.0       61.0   68.6   55.1   71.6   62.2   69.3   72.0
German*             50.2    49.9       58.7   69.5   52.5   73.0   60.4   70.6   73.6
Armenian*           46.2    47.6       58.9   65.8   51.2   71.1   61.2   67.8   72.2
Western Armenian    58.2    55.6       65.6   71.0   59.0   74.8   67.7   72.4   75.5
Bulgarian*          50.3    48.5       57.8   66.0   51.5   69.8   59.9   67.4   70.8
Faroese             54.1    50.5       59.5   67.5   52.9   71.3   61.2   68.6   72.0
Welsh*              49.9    50.4       57.6   66.9   52.4   69.5   58.8   67.8   70.0
Lithuanian*         49.7    48.8       57.9   66.4   51.4   69.5   59.4   67.8   70.5
Romanian*           53.0    51.8       60.6   69.2   54.9   73.2   62.8   70.4   74.0
Czech*              50.8    49.6       58.1   67.9   52.1   71.3   59.4   69.4   72.2

These results show that simple transformations have a large impact. Deleting accents alone results in an average gain of 7.2 accuracy points over the raw data across all 65 source languages, more than using the lexicon of open-class words (an average gain of 2.1 points); in fact, the latter resource has the lowest impact on results. The closed-class (grammatical word) lexicon increases the accuracy score by an average of 14.1 points across all source languages, confirming the observations of Bernhard & Ligozat (2013). Finally, the best results are obtained by combining all the resources (ACO): an average accuracy of 61.0 (+18.7) for the 65 source languages, and of 72.1 (+21.7) for the 10 languages presented in Table 5.

Comparing the data from Tables 3 and 5, we can see that, overall, as the percentage of transformed words increases, the average number of subwords per word decreases and the accuracy score increases. The decrease in the average number of subwords per word indicates, indirectly, that the data are closer to those used to pre-train the language model tokenizer and are thus less “noisy”.
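The subword-fertility figures in the last column of Table 3 can be approximated with the XLM-R tokenizer. The sketch below tokenizes words in isolation, which only approximates in-context segmentation, so the exact figures may differ from those reported above.

```python
from transformers import AutoTokenizer

# Average number of XLM-R subwords per word, as in the last column of Table 3.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")

def subwords_per_word(words):
    return sum(len(tok.tokenize(w)) for w in words) / len(words)

raw = "Mìt dr Jugend ìsch nit loos !".split()         # untransformed Alsatian
aco = "Mit der Jugend ist nada loos !".split()        # after the ACO procedures
print(subwords_per_word(raw), subwords_per_word(aco))  # expect fewer subwords after ACO
```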

These results are lower than, but very close to, the accuracy score of 78% obtained by Blaschke et al. (2023) for the same Alsatian corpus by manipulating the German corpus before training the POS tagging model. In our case, we did not re-train the models but used them as they were.

What is surprising, however, is that the source language which yields the best results is Western Armenian, which reaches a higher score than German, even after corpus pre-processing, although Western Armenian is not included in the corpus of 100 languages used to pre-train XLM-R (Conneau et al. 2020). This was already observed by de Vries et al. (2022) for Swiss German. German is included and accounts for 10,297M tokens, making it the seventh most represented language after English (55,608M), Vietnamese (24,757M), Russian (23,408M), Indonesian (22,704M), Persian (13,259M) and Romanian (10,354M). It should be noted that Romanian obtains the second best result as a source language for tagging Alsatian. De Vries et al. (2022) had already noticed that Romanian was the best source language for any target language and language family in their experiments; this is confirmed again for Alsatian. The under-performance of German as a source language suggests that the model is less able to adapt to noise and that it could be somewhat “over-trained” for German. Languages such as Western Armenian or Faroese, which are not part of the pre-training corpus of XLM-R, could on the contrary produce models which are more flexible and less negatively influenced by noise.

4. Discussion and Conclusions

Dealing with noise in extremely low-resource languages can take several forms. In this article, we have focused on noise due to different scripts and orthographic variants, as well as on noise due to the (supposedly lower) transferability of models trained on unrelated languages in zero-shot cross-lingual settings. The results obtained in our experiments confirm previous observations about the decisive role of linguistic proximity and noise reduction techniques. The accuracy before and after fine-tuning on the Buryat UD corpus is relatively high for Turkish and Uyghur, which is not an obvious result. In fact, the linguistic discussion on the genetic affinity of Turkic and Mongolic languages still raises controversy, and this kind of experiment can certainly contribute to the discussion on how related these two language families are. However, some questions remain as to the interpretation of the capacities of multilingual models, and Dagur and Alsatian are representative of different situations in that respect. Dagur and the other Mongolic languages are barely represented in the pre-training corpus of the XLM-R multilingual model, since only Mongolian is included. In this case, training the POS tagging model on a related language such as Buryat is important. On the other hand, several Germanic languages are represented in the XLM-R pre-training corpus: Afrikaans, Danish, German, English, Western Frisian, Dutch, etc. In this case, training on linguistically related languages seems to be more susceptible to noise than training on more distant languages.

These conclusions should be verified for other languages and other tasks, to see whether similar observations can be made in different settings. This could then contribute to a better understanding of the effects of noise stemming from corpora in extremely low-resource languages.


Bibliographie

Aepli N. & Clematide S. (2018). “Parsing Approaches for Swiss German”, Proceedings of SwissText 2018.

Aepli N. & Sennrich R. (2022). “Improving Zero-Shot Cross-lingual Transfer Between Closely Related Languages by Injecting Character-Level Noise”, Findings of the Association for Computational Linguistics: ACL 2022, 4074–4083. https://doi.org/10.18653/v1/2022.findings-acl.321.

Aggarwal A. & Zeman D. (2020). “Estimating POS annotation consistency of different treebanks in a language”, Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories, 93–110.

Al Sharou K., Li Z. & Specia L. (2021). “Towards a Better Understanding of Noise in Natural Language Processing”, Proceedings of the Conference Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing Methods and Applications, 53–62. https://doi.org/10.26615/978-954-452-072-4_007.

Austin W. M. (1952). “A brief outline of Dagur grammar”, Studies in Linguistics 10(3): 65–75.

Badmaeva E. & Tyers F. M. (2017). “Dependency Treebank for Buryat”, Proceedings of the 15th International Workshop on Treebanks and Linguistic Theories (TLT15), 1–12.

Badmaeva E. & Tyers F. M. (2023). UD Buryat-BDT Treebank. Universal Dependencies v2.12. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. https://github.com/UniversalDependencies/UD_Buryat-BDT/tree/master.

Bender E. M. & Friedman B. (2018). “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science”, Transactions of the Association for Computational Linguistics 6: 587–604. https://doi.org/10.1162/tacl_a_00041.

Bernhard D. (2014). “Adding Dialectal Lexicalisations to Linked Open Data Resources: The Example of Alsatian”, Proceedings of the Workshop on Collaboration and Computing for Under Resourced Languages in the Linked Open Data Era (CCURL 2014), 23–29.

Bernhard D. (2021). Lexique multilingue alsacien français allemand relié aux synsets de BabelNet. https://doi.org/10.34847/nkl.3f9b2i11.

Bernhard D. (2023). “Transfert zero-shot pour l’étiquetage morphosyntaxique: analyse de l’impact de la transformation des données à étiqueter pour les dialectes alsaciens”, Actes des 5èmes journées du Groupement de Recherche CNRS “Linguistique Informatique, Formelle et de Terrain”, Nov. 2023, Nancy, France, 30–38.

Bernhard D. & Ligozat A.-L. (2013). “Es esch fàscht wie Ditsch, oder net? Étiquetage morphosyntaxique de l’alsacien en passant par l’allemand”, Actes de TALARE 2013: Traitement Automatique des Langues Régionales de France et d’Europe, 209–220.

Bernhard D., Ligozat A.-L., Bras M., Martin F., Vergez-Couret M., Erhart P., Sibille J., Todirascu A., Boula De Mareüil P. & Huck D. (2021). “Collecting and annotating corpora for three under-resourced languages of France: Methodological issues”, Language Documentation & Conservation 15: 316–357.

Bernhard D., Ligozat A.-L., Martin F., Bras M., Magistry P., Vergez-Couret M., Steiblé L., Erhart P., Hathout N., Huck D., Rey C., Reynés P., Rosset S., Sibille J. & Lavergne T. (2018). “Corpora with Part-of-Speech Annotations for Three Regional Languages of France: Alsatian, Occitan and Picard”, in Calzolari N., Choukri K., Cieri C. et al. (eds.) Proceedings of the 11th edition of the Language Resources and Evaluation Conference, 3917–3924.

Blaschke V., Schütze H. & Plank B. (2023). “Does Manipulating Tokenization Aid Cross-Lingual Transfer? A Study on POS Tagging for Non-Standardized Languages”, Tenth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2023), 40–54.

Blum F. (2022). “Evaluating zero-shot transfers and multilingual models for dependency parsing and POS tagging within the low-resource language family Tupían”, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, 1–9.

Conneau A., Khandelwal K., Goyal N., Chaudhary V., Wenzek G., Guzmán F., Grave E., Ott M., Zettlemoyer L. & Stoyanov V. (2020). “Unsupervised Cross-lingual Representation Learning at Scale”, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 8440–8451. https://doi.org/10.18653/v1/2020.acl-main.747.

de Vries W., Wieling M. & Nissim M. (2022). “Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages”, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 7676–7685. https://doi.org/10.18653/v1/2022.acl-long.529.

Dolińska J. & Bernhard D. (2024). “POS Tagging for the Endangered Dagur Language”, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 12906–12916.

Hana J., Feldman A. & Aharodnik K. (2011). “A low-budget tagger for Old Czech”, Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, 10–18.

Lauscher A., Ravishankar V., Vulić I. & Glavaš G. (2020). “From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers”, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 4483–4499. https://doi.org/10.18653/v1/2020.emnlp-main.363.

Lothritz C., Lebichot B., Allix K., Veiber L., Bissyande T., Klein J., Boytsov A., Lefebvre C. & Goujon A. (2022). “LuxemBERT: Simple and practical data augmentation in language model pre-training for Luxembourgish”, Proceedings of the Language Resources and Evaluation Conference, 5080–5089.

Martin S. E. (1961). Dagur Mongolian Grammar, Texts, and Lexicon. Based on the Speech of Peter Onon. Indiana University.

Millour A. (2020). Myriadisation de ressources linguistiques pour le traitement automatique de langues non standardisées. PhD thesis, Sorbonne Université. NNT: tel-03083213v2. https://hal.science/tel-03083213v2/document.

Miller R. A. (1962). “Reviewed Work(s): Dagur Mongolian Grammar, Texts, and Lexicon by Samuel E. Martin”, Journal of the American Oriental Society 82(3): 439–444.

Robbeets M., Bouckaert R., Conte M. et al. (2021). “Triangulation supports agricultural spread of the Transeurasian languages”, Nature 599: 616–621. https://doi.org/10.1038/s41586-021-04108-8.

Rosa R. & Žabokrtský Z. (2015). “KLcpos3 – a Language Similarity Measure for Delexicalized Parser Transfer”, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 243–249.

Todaeva B. Kh. (1986). Dagurskij jazyk [Dagur]. Nauka.

Trask R. L. (1998). “The typological position of Basque: Then and now”, Language Sciences 20(3): 313–324.

Tsumagari T. (2005). “Dagur”, in Janhunen J. (ed.) The Mongolic Languages. Routledge, 129–153.

Tsybenov B. D. & Tumurdei G. (2014). Kratkij Dagursko-Russkij Slovar. Russian Academy of Sciences.

Ugarte I. I. (2020). “Basque among the world’s languages: A typological approach”, in Santazilia E., Krajewska D., Zuloaga E. & Ariztimuño López B. (eds.) Fontes linguae vasconum 50 urte. Ekarpen berriak euskararen ikerketari / Nuevas aportaciones al estudio de la lengua vasca. Gobierno de Navarra, 329–349.

Wang X., Ruder S. & Neubig G. (2022). “Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation”, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 863–877. https://doi.org/10.18653/v1/2022.acl-long.61.

Yamada Y. (2020). “Dagur”, in Robbeets M. I. & Savelyev A. (eds.) The Oxford Guide to the Transeurasian Languages. Oxford University Press, 321–333.

Zeidler E. & Crévenat-Werner D. (2008). Orthographe alsacienne: bien écrire l’alsacien de Wissembourg à Ferrette. Colmar: J. Do Bentzinger.


Notes

1 We only include Alemannic Alsatian dialects in the corpus and not Franconian varieties.

2 The Alsatian corpus is available at https://zenodo.org/doi/10.5281/zenodo.1170128.

3 Buryat was not used as training data for fine-tuning by de Vries et al. (2022) due to the small size of the dataset: the train set contains 19 sentences and 153 tokens, while the test set contains 908 sentences and 10,032 tokens. In our experiments, we reversed the splits and used the larger test set for training and the train set for validation. De Vries et al. (2022) fine-tuned their models using 10K training samples (sentences) and oversampled languages with fewer than 10K training samples over multiple epochs. Their experiments show that accuracies start reaching a plateau with 2.5K training samples and start decreasing with 10K samples, which they chose as a threshold. We trained the model for Buryat for 10 epochs, which corresponds to close to 10K training samples (9,080 samples). We report the average results on the Dagur corpus over 5 training runs.

4 No attempt was made to transform the Alsatian corpus to any language other than German.



How to cite this article

Electronic reference

Delphine Bernhard and Joanna Dolińska, “Managing Noise in Part-of-Speech Tagging for Extremely Low-Resource Languages: Comparing Strategies for Corpus Collection and Annotation in Dagur and Alsatian”, Corpus [Online], 26 | 2025, published online 30 January 2025, accessed 8 February 2025. URL: http://journals.openedition.org/corpus/9177; DOI: https://doi.org/10.4000/1364t


Authors

Delphine Bernhard

Université de Strasbourg, LiLPa UR 1339, F-67000 Strasbourg, France

Joanna Dolińska

University of Warsaw, Faculty of "Artes Liberales", Center for Research and Practice in Cultural Continuity, ul. Krakowskie Przedmieście 26/28, 00-927 Warsaw, Poland


Copyright

The text and other elements (illustrations, imported annex files) are “All rights reserved”, unless otherwise stated.
