Extremely low-resource languages are often predominantly oral languages, without orthographic standards or with various competing standards. The lack of spelling standards does not mean that these languages are not or cannot be written, but that the collection, documentation and annotation of corpora must take into account numerous orthographic variants, which can be regarded as a kind of “noise” (Al Sharou et al. 2021). Extremely low-resource languages are also affected by common categories of noise in Natural Language Processing (NLP): code-switching, spontaneous speech on social media, overrepresentation of one particular variety of a given language in text, etc. All of these phenomena add up and are often difficult to disentangle, e.g. spelling variation due to a writer’s own preferences versus variation used to represent a specific variant (Millour 2020). Moreover, in the context of cross-lingual transfer learning, where models trained in one language are directly applied to another language, research has shown that the linguistic proximity of languages is a key factor in achieving better results. Thus, if we consider that “noise” refers to any kind of “unexpected” or “non-standard” content (Al Sharou et al. 2021), then applying cross-lingual transfer techniques to new unrelated languages is tantamount to applying the models to noisy data.
- 1 We only include Alemannic Alsatian dialects in the corpus and not Franconian varieties.
In this article, we focus on two extremely low-resource languages, Dagur and Alsatian. While these two languages are typologically unrelated, both are affected by noise in similar ways, providing differing points of view on related issues. The Alsatian Alemannic dialects1 are spoken primarily in Northeastern France and are related to other Alemannic dialects in Europe. The number of speakers is in constant decline and inhabitants of the Alsace region tend to favour the French language. Dagur is the easternmost Mongolic language, spoken by around 130,000 speakers mainly in the northeastern part of China (Yamada 2020). There is no widely used written standard for Dagur, even though it has been written down in the Manchu script (Tsumagari 2005), as well as in Cyrillic and Latin (Yamada 2020).
In this article, we compare the noise in corpora for Alsatian and Dagur and its impact on part-of-speech (POS) annotation and tagging by addressing the following research questions:
- RQ1: What strategies can be used to deal with noise during corpus collection in an extremely low-resource setting? Two divergent choices can be made here: either to “accept” noise or to reduce it. Our work addresses both strategies: for Alsatian, the texts are kept “as is”, while the Dagur documents are all written down in Cyrillic, with spelling uniformisation.
- RQ2: What strategies can be used to handle noise for automatic POS tagging? We highlight the importance of using training data in closely related languages. We also show how simple pre-processing procedures can be applied to improve results on un-normalised corpora, by making the low-resource Alsatian dialects closer to the better-resourced German language.
The datasets used in this work consist of two corpora for Alsatian and Dagur.
The Dagur corpus contains 4,502 tokens and 550 sentences. It includes excerpts from Todaeva (1986) written in Cyrillic script and texts from Martin (1961) in Latin script.
The Alsatian corpus contains 12,582 surface tokens and 12,907 syntactic words.2 It consists of texts from two main sources: Wikipedia articles from the Alemannic Wikipedia (which were explicitly categorized as Alsatian) and chronicles written for a local General Council magazine. In addition, it contains an excerpt from a theater play and some recipes.
When collecting corpora for Alsatian and Dagur, the presence of noise primarily results from the lack of common spelling standards and from the individual choices made by the authors or by the persons who compiled and transcribed the texts. For Alsatian, only the Latin script has been used, though Latin Fraktur can be found in older documents. Alsatian has no commonly accepted spelling standard, despite several proposals for a unified spelling, such as ORTHAL (Zeidler & Crévenat-Werner 2008). Moreover, the Alsatian dialects are used mainly orally and written production is rather limited. Diverging choices were made for the two languages. For Alsatian, the corpus was collected from different sources and authors (Bernhard et al. 2018), and spelling diversity was retained without any attempt at normalisation, except for the correction of clear spelling errors (Bernhard et al. 2021): this choice was made so as to obtain a dataset representative of the diversity found in Alsatian texts. In other words, the decision was made to “embrace” noise. As a consequence, noise arising from spelling inconsistencies is handled at later stages of corpus processing, in particular for automatic POS annotation (see Section 3.2). For Dagur, the chosen strategy was to transliterate the sources written in the Latin script into Cyrillic, closely following the transliteration pattern introduced by Todaeva (1986). Moreover, the only other Mongolic language represented in the Universal Dependencies dataset is Buryat, which also uses Cyrillic. Various Dagur resources written in the Cyrillic script (Todaeva 1986, Tsybenov & Tumurdei 2014) use different orthographies. In one of the analysed stories, some words are even transliterated in two different ways by the same author, such as “yausan” and “yavsan” in the story “Khaniikaa” (Todaeva 1986). Such instances of “noise” were resolved during data collection, so as to unify the spelling used in the corpus. The selected spelling reflected the prevalent variant in the analysed resources, which was “yausan” in this case. Furthermore, the annotator of Dagur decided to consistently apply the spelling “-au-” instead of “-av-” because it better reflects Dagur pronunciation.
The Alsatian and Dagur corpora were manually annotated with POS, following the Universal Dependencies guidelines for POS tags.
The Dagur corpus was annotated by one non-native speaker of Dagur who followed the guidelines detailed for the Buryat language on the UD Homepage (Badmaeva & Tyers 2017, 2023). The manual annotation of corpora for extremely low-resource languages faces the challenge of finding annotators who are both proficient in the language and versed in linguistic annotation tasks. As shown by Millour (2020), crowdsourcing can be a valid option, but it requires close contact with speaker communities and, so far, attempts to find native speakers of Dagur as consultants or co-annotators have not been successful. However, it is envisaged that Dagur native speakers could share their views on the corpus, use it for documentation purposes, and potentially expand it.
For Alsatian, the corpus was annotated by a main annotator and revised by a second annotator with the help of two experts (Bernhard et al. 2018). A small part of the corpus was annotated by several annotators to perform an inter-annotator agreement study: the Kappa coefficient κ ranged from 0.82 to 0.93 depending on the annotator pairs.
We refer the reader to Bernhard et al. (2018) and Bernhard et al. (2021) for the full documentation of the Alsatian corpus, and provide here the data statement for the Dagur corpus, which is an extension of the corpus presented in Dolińska and Bernhard (2024). It complies with the template proposed by Bender and Friedman (2018), the goal of which is to support the creation of ethically responsive NLP tools. Furthermore, data statements are designed to avoid exclusion, overgeneralization, and underexposure of given language communities.
Curator rationale: The two sources, Martin (1961) and Todaeva (1986), represent different Dagur varieties, were recorded at different points in time, and include sections on phonology and morphology as well as lexicons (Dagur-Russian in Todaeva 1986; Dagur-English and English-Dagur in Martin 1961).
Language variety: Texts from Todaeva (1986) include oral Dagur literature from a group of speakers, while materials from Martin (1961) are based on the idiolect of one Dagur speaker.
Speaker demographic: Speakers in Todaeva (1986) represent the Butha variety of Dagur, but they are most likely bilingual and also fluent in Chinese. They are probably elderly persons pursuing an agricultural mode of subsistence. It is not clear how many speakers shared their stories with B. Kh. Todaeva. Martin (1961) is based on the idiolect of a middle-aged Dagur speaker, Peter Uregungee Onon, who was born in Bokore-cien in Inner Mongolia. P. U. Onon was a fellow at The Johns Hopkins University when Austin (1952) recorded and analysed his speech. Soon after that, S. E. Martin carried out further research on these records with the participation of P. U. Onon (Miller 1952).
Annotator demographic: The annotator is a female expert in Mongolic languages with experience in computational linguistics.
Speech situation: Time and place: Butha Dagur oral literature was most likely collected by B. Kh. Todaeva in the 1980s in Inner Mongolia (China). The idiolect of P. U. Onon was recorded by the early 1950s at the latest in the USA (Austin 1952). Modality (spoken/signed/written): The texts represented in the Dagur corpus were originally recorded orally. Scripted/edited vs. spontaneous: Materials collected by B. Kh. Todaeva were scripted and edited. The idiolect of P. U. Onon was recorded by W. M. Austin and later edited by S. E. Martin. Synchronous vs. asynchronous interaction: S. E. Martin’s work was based on a synchronous interaction with a Dagur speaker. The type of interaction in the case of B. Kh. Todaeva is not known from the sources.
Intended audience: The intended audience of Todaeva (1986) is non-Dagur speakers interested in Dagur folklore and language. The addressees of Martin (1961) are primarily linguists.
Text characteristics: Todaeva (1986) includes folktales, riddles and proverbs written in a lively language; they mention numerous names of plants and animals and contain descriptions of landscapes. Martin (1961) consists of concise sentences with basic Dagur vocabulary that could serve as a textbook for learning Dagur.
Following earlier work on transfer learning (Rosa & Žabokrtský 2015), we measure whether closely related languages have similar distributions of POS tag sequences, which could reflect both their morphosyntactic proximity and the consistency of POS annotation across related languages.
To this aim, we use the KLcpos3 measure to assess whether the distributions of POS trigrams in our corpora are similar to those of typologically related languages in UD v2.12. KLcpos3 is based on the Kullback-Leibler divergence of the distributions of POS trigrams (Rosa & Žabokrtský 2015). It was proposed in the context of source treebank selection for delexicalized parsing and has also been used to measure the annotation consistency of different treebanks for the same language (Aggarwal & Zeman 2020). Here we use it to assess the divergence in POS trigram distributions: values closest to zero correspond to the most similar distributions. The languages with the lowest KLcpos3 are shown in Table 1. For Alsatian, five out of six languages in the table are Germanic languages, with German and Swiss German being closest. For Dagur, Buryat obtains the lowest KLcpos3 value. The list also includes Uyghur, Turkish, and Dravidian and Indo-Aryan languages spoken in India (Marathi, Telugu and Tamil).
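To make the measure concrete, here is a minimal sketch of how KLcpos3 can be computed from POS-tagged sentences. The flat floor value used for trigrams unseen in the source corpus is an assumption; the original formulation handles unseen source trigrams with a small smoothing constant.

```python
# Minimal sketch of KLcpos3 (Rosa & Žabokrtský 2015): the Kullback-Leibler divergence
# between the POS trigram distributions of a target corpus and a candidate source corpus.
import math
from collections import Counter

def pos_trigram_dist(sentences):
    """sentences: list of POS tag sequences, e.g. [["DET", "NOUN", "VERB"], ...]."""
    counts = Counter()
    for tags in sentences:
        counts.update(zip(tags, tags[1:], tags[2:]))
    total = sum(counts.values())
    return {trigram: c / total for trigram, c in counts.items()}

def klcpos3(target_sents, source_sents, eps=1e-6):
    tgt = pos_trigram_dist(target_sents)
    src = pos_trigram_dist(source_sents)
    # Lower values indicate more similar trigram distributions, hence a better source candidate.
    return sum(p * math.log(p / src.get(trigram, eps)) for trigram, p in tgt.items())

tgt = [["PRON", "AUX", "VERB", "PUNCT"], ["DET", "NOUN", "VERB", "PUNCT"]]
src = [["DET", "NOUN", "AUX", "VERB", "PUNCT"]]
print(klcpos3(tgt, src))
```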
The lowest KLcpos3 value for Buryat confirms what we know about the Dagur language: it belongs to the Mongolic language family, which also includes Buryat. The presence of Uyghur and Turkish in the group of languages with the lowest KLcpos3 values can contribute to the scientific discussion on the genetic affinity between the Mongolic and Turkic languages (Robbeets et al. 2021). Manual annotation and the similar POS trigram distributions for Dagur and the Turkic languages suggest that these languages are syntactically very close to each other.
Table 1. Lowest KLcpos3 values for Alsatian and Dagur
| Alsatian | KLcpos3 | Dagur | KLcpos3 |
|---|---|---|---|
| German | 0.56 | Buryat | 0.81 |
| Swiss German | 0.71 | Marathi | 0.99 |
| Dutch | 0.72 | Uyghur | 1.08 |
| French | 0.92 | Telugu | 1.15 |
| English | 0.93 | Tamil | 1.24 |
| Afrikaans | 0.97 | Turkish | 1.47 |
The Alsatian and Dagur corpora are used in automatic POS tagging experiments, in order to evaluate different zero-shot settings. Large multilingual language models lend themselves to transfer approaches which do not require annotated resources for the target language (zero-shot). Such approaches are supposed to be particularly useful for languages with few resources. In this section, we compare two different zero-shot methodologies for automatic POS tagging, in order to understand and mitigate the effects of noise and of the scarcity of resources for Dagur and Alsatian.
The first method is applied to Dagur and consists in using models trained on a combination of two languages: a non-Mongolic language and Buryat. We repeat the experiments performed by Dolińska and Bernhard (2024) on our larger Dagur corpus. We seek to confirm some observations made on transfer performance for POS tagging in extremely low-resource languages. Lauscher et al. (2020) showed that transfer performance is primarily affected by the similarity in syntactic properties between source and target language. This analysis was confirmed by de Vries et al. (2022), who investigated zero-shot cross-lingual transfer learning with multilingual pre-trained models for the task of POS tagging, using XLM-RoBERTa (Conneau et al. 2020) as the multilingual pre-trained model, with 65 source languages for training and 105 target languages for testing. De Vries et al. (2022) show that the inclusion of the target language – and, to a lesser degree, the source language – in the pre-training dataset of the multilingual model is of particular importance. Being part of the same language family also has an effect on accuracy, as does sharing the same writing system. Blum (2022) presents zero-shot experiments for languages from the low-resource Tupían family. Their results also show that the proximity of languages is a strong predictor of performance and that combining several related languages can be useful.
The second method is applied to Alsatian and aims to further improve the performance of zero-shot transfer by using data transformation to reduce the noise caused by spelling inconsistencies (Bernhard 2023). Several automatic approaches have been proposed; they usually rely on data transformation techniques that can be applied at different levels: the pre-training corpus of the language model, the data used for fine-tuning on the target task, or the target data itself (Aepli & Sennrich 2022, Bernhard & Ligozat 2013, Blaschke et al. 2023, Hana et al. 2011, Lothritz et al. 2022, Wang et al. 2022). They involve transforming data from a language known to the model to bring it as close as possible to the target language or, conversely, transforming data from the target language to bring it closer to a language known to the model. In this article, we evaluate the latter type of approach, using in particular Alsatian-German bilingual lexicons. We also employ a simple diacritic elimination procedure in order to reduce spelling variation in Alsatian.
In the following sections, we detail our experiments with the Dagur and Alsatian corpora.
For Dagur, we directly applied the models trained by de Vries et al. (2022), which were all trained on non-Mongolic languages, to the Dagur corpus and continued training these models on Buryat. We analysed the results in three different zero-shot settings (see Figure 1): (1) Unrelated zero-shot: model trained on an unrelated language, (2) Related zero-shot: model trained on the related Buryat language and (3) Unrelated+related zero-shot: a combination of training on an unrelated language + Buryat.
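As an illustration of the third setting, the following sketch outlines how the training of a ready-made single-source tagger can be continued on the Buryat UD data with the Hugging Face transformers library. The model identifier, the path to the converted Buryat data, the label set and the hyperparameters are assumptions, not the exact configuration used in our experiments.

```python
# Hedged sketch of the "unrelated + related" setting: continued training of one of the
# single-source POS taggers released by de Vries et al. (2022) on the Buryat UD treebank.
from transformers import (AutoTokenizer, AutoModelForTokenClassification, Trainer,
                          TrainingArguments, DataCollatorForTokenClassification)
from datasets import load_dataset

model_name = "wietsedv/xlm-roberta-base-ft-udpos28-is"   # Icelandic source model (assumed id)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
label2id = model.config.label2id                          # reuse the model's own UPOS label mapping

# Buryat UD (bxr_bdt) converted beforehand to JSON with "tokens" and "upos" fields (assumed path).
data = load_dataset("json", data_files={"train": "bxr_bdt-train.json"})

def encode(batch):
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, upos in enumerate(batch["upos"]):
        previous, labels = None, []
        for wid in enc.word_ids(batch_index=i):
            if wid is None or wid == previous:
                labels.append(-100)               # ignore special tokens and non-first subwords
            else:
                labels.append(label2id[upos[wid]])
            previous = wid
        all_labels.append(labels)
    enc["labels"] = all_labels
    return enc

encoded = data.map(encode, batched=True, remove_columns=data["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("xlmr-isl-buryat-pos", learning_rate=2e-5,
                           num_train_epochs=10, per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```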
Figure 1. POS tagging methods for Dagur
- 3 Buryat has not been used as training data for fine-tuning by de Vries et al. (2022) due to the small (...)
We selected the 10 best source languages for transfer based on an analysis of the results obtained on a subpart of the Dagur corpus (6 documents): Ancient Greek, Basque, Faroese, Icelandic, Latin, Polish, Romanian, Telugu, Turkish and Uyghur. Table 2 shows the results obtained by directly applying the POS tagging models trained by de Vries et al. (2022) to the Dagur corpus (column ‘base.’) and the new results obtained by continuing the training of these models on the Buryat UD corpus v2.12 (Badmaeva & Tyers 2023) (column ‘Buryat’). The last line in the table also shows the results obtained by fine-tuning the XLM-RoBERTa base model (Conneau et al. 2020) on the Buryat UD corpus.3
Table 2. Accuracy before (base.) and after (Buryat) fine-tuning on the Buryat UD corpus. ∆ corresponds to the increase over the base. column. Results for the fine-tuned models are averaged over 5 training runs and the standard deviation is reported (std). Languages present in the XLM-R pre-training data are underlined
| Source language | base. | Buryat | ∆ | std. |
|---|---|---|---|---|
| Ancient Greek | 51.84 | 61.48 | 9.64 | 0.38 |
| Basque | 57.91 | 61.86 | 3.95 | 0.23 |
| Faroese | 52.00 | 61.05 | 9.05 | 0.70 |
| Icelandic | 53.58 | 62.25 | 8.68 | 0.47 |
| Latin | 53.38 | 61.95 | 8.57 | 0.66 |
| Polish | 53.02 | 61.44 | 8.42 | 0.55 |
| Romanian | 54.46 | 62.02 | 7.55 | 0.78 |
| Telugu | 55.31 | 60.16 | 4.86 | 0.34 |
| Turkish | 56.13 | 61.75 | 5.62 | 0.42 |
| Uyghur | 56.57 | 61.97 | 5.40 | 0.43 |
| Buryat only | – | 61.78 | – | 0.34 |
The results show that training on Buryat yields improvements over training on unrelated languages: there is an increase of 3.87 accuracy points over the results obtained by training on Basque, which is the best source language. Besides, combining an unrelated language with Buryat leads to improvements for all source languages, of up to 9.64 accuracy points, with languages unavailable in the XLM-R pre-training corpus (Ancient Greek and Faroese) showing the highest gains. The best results overall are obtained by combining Icelandic with Buryat. This shows that a combination of languages, even if one of them is distant from the target language, may be slightly beneficial in a zero-shot cross-lingual setting. However, this might simply be due to an increase in the amount of training data and the gain over training on Buryat only is still low (from 61.78 to 62.25).
Figure 2. Comparison of F1 scores for individual POS with the models trained on Basque and Buryat
We also compare the F1-scores by POS tag for the best unrelated language (Basque) and Buryat in Figure 2. Unsurprisingly, the highest score is obtained for the PUNCT (punctuation) category, which is not specific to a given language and can be learned rather efficiently from other, even distant, languages. The figure also shows that training on Buryat is especially useful for the PART (particle) and PRON (pronoun) categories and, to a lesser extent, for ADJ (adjective), ADP (adposition), ADV (adverb) and AUX (auxiliary). Basque and Buryat are almost on par for the CCONJ (coordinating conjunction), DET (determiner), NOUN, NUM (numeral), PUNCT and VERB categories. These results can be explained by several features shared by Basque and Buryat. Basque is a “morphologically agglutinative language […] with no grammatical gender and basic SOV word order” (Ugarte 2020). Furthermore, it has a “pre-verbal word order, preposed modifiers, a rich case system, a highly regular agglutinating morphology with few alternations, an absence of prefixes” (Trask 1998). These features apply to Buryat too, which means that, morpho-syntactically, the two languages show a high level of resemblance. Of course, this does not imply that they originated from the same source language, but the comparison of F1-scores by POS tag suggests that including both Buryat and Basque in further training on larger corpora could be beneficial for both languages.
For Alsatian, we seek to take advantage of the similarities between languages, in particular the similarities between Alsatian and German.
We transform our Alsatian corpus to approximate German using three simple procedures: Accentuation (A): removal of vowel diacritics specific to the Alsatian dialects and absent in German; Closed classes (C): use of a conversion lexicon from Alsatian into German consisting of 133 forms belonging to the closed classes (Bernhard & Ligozat 2013); Open classes (O): use of a lexicon of 6,699 Alsatian-German word pairs (Bernhard 2014, 2021).
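The sketch below illustrates how such token-level transformations can be implemented. The diacritic set and the lexicon file names and format are assumptions; the actual resources are those cited above.

```python
# Hedged sketch of the A (accents), C (closed classes) and O (open classes) procedures
# used to bring Alsatian tokens closer to German.
ALSATIAN_DIACRITICS = str.maketrans({"ì": "i", "à": "a", "è": "e", "ù": "u",
                                     "Ì": "I", "À": "A", "È": "E", "Ù": "U"})

def load_lexicon(path):
    """One tab-separated 'alsatian<TAB>german' pair per line (assumed format)."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            alsatian, german = line.rstrip("\n").split("\t")
            lexicon[alsatian] = german
    return lexicon

def transform(tokens, accents=False, closed_lex=None, open_lex=None):
    """Apply any combination of the A, C and O procedures to a list of tokens."""
    out = []
    for tok in tokens:
        if closed_lex and tok in closed_lex:      # C: closed-class conversion
            tok = closed_lex[tok]
        elif open_lex and tok in open_lex:        # O: open-class conversion
            tok = open_lex[tok]
        elif accents:                             # A: strip Alsatian-specific vowel diacritics
            tok = tok.translate(ALSATIAN_DIACRITICS)
        out.append(tok)
    return out

# A-only example, corresponding to the "A" row of Table 4
print(transform(["Mìt", "dr", "Jugend", "ìsch", "nit", "loos", "!"], accents=True))
```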
These procedures can also be combined to maximise the number of transformations applied to the input corpus. Table 3 summarizes the number and percentage of words transformed by each procedure, while Table 4 gives examples.
Table 3. Number and percentage of syntactic words transformed using each procedure. The last column shows the average number of subwords per word after tokenization with XLM-R-base
| Procedure | # transformations | % words | # subwords / word |
|---|---|---|---|
| None | 0 | 0% | 1.92 |
| A | 2,586 | 20% | 1.70 |
| C | 2,475 | 19% | 1.82 |
| O | 749 | 6% | 1.88 |
| AC | 4,344 | 34% | 1.66 |
| AO | 3,108 | 24% | 1.68 |
| CO | 3,127 | 24% | 1.78 |
| ACO | 4,804 | 37% | 1.64 |
Table 4. Examples of sub-words depending on pre-processing
| Original text | Mìt | dr | Jugend | ìsch | nit | loos | ! |
|---|---|---|---|---|---|---|---|
| Subwords | M␣ì␣t | dr | Jugend | ì␣sch | nit | loo␣s | ! |
| A | Mit | dr | Jugend | isch | nit | loo␣s | ! |
| C | Mit | der | Jugend | ist | nicht | loo␣s | ! |
| O | M␣ì␣t | dr | Jugend | ì␣sch | nada | loo␣s | ! |
| AC | Mit | der | Jugend | ist | nicht | loo␣s | ! |
| AO | Mit | dr | Jugend | isch | nada | loo␣s | ! |
| CO | Mit | der | Jugend | ist | nada | loo␣s | ! |
| ACO | Mit | der | Jugend | ist | nada | loo␣s | ! |
- 4 No attempt was made to transform the Alsatian corpus to any language other than German.
Table 5 details the results, in terms of accuracy, for the 10 source languages that achieve the best results on average for Alsatian and for the different transformations used to approximate German.4 For comparison, we also show the results obtained for Swiss German dialects (Aepli & Clematide 2018) with the same source languages (without data transformation): these dialects are very close to the Alsatian dialects, especially to those of the Upper Alemannic area in the south of Alsace, bordering Switzerland.
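Concretely, tagging in this setting amounts to applying a ready-made source-language model directly to the (possibly transformed) Alsatian tokens, without any re-training. The following sketch illustrates this; the model identifier is an assumption based on the naming of the models released by de Vries et al. (2022).

```python
# Hedged sketch of zero-shot tagging of transformed Alsatian tokens with a ready-made
# source-language POS tagger, used as is, without re-training.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "wietsedv/xlm-roberta-base-ft-udpos28-de"   # German source model (assumed id)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

def tag(words):
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    tags, seen = [], set()
    for idx, wid in enumerate(enc.word_ids()):
        if wid is not None and wid not in seen:   # keep the prediction of the first subword
            seen.add(wid)
            tags.append(model.config.id2label[int(logits[idx].argmax())])
    return tags

# ACO-transformed version of the example sentence from Table 4
print(tag(["Mit", "der", "Jugend", "ist", "nada", "loos", "!"]))
```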
Table 5. Accuracy (in %) for different preprocessings. The 10 source languages shown are those with the best results on average for Alsatian. Languages present in the XLM-R pre-training data are underlined
| Source language | Swiss German | Alsatian | A | C | O | AC | AO | CO | ACO |
|---|---|---|---|---|---|---|---|---|---|
| Afrikaans | 55.2 | 53.0 | 61.0 | 68.6 | 55.1 | 71.6 | 62.2 | 69.3 | 72.0 |
| German | 50.2 | 49.9 | 58.7 | 69.5 | 52.5 | 73.0 | 60.4 | 70.6 | 73.6 |
| Armenian | 46.2 | 47.6 | 58.9 | 65.8 | 51.2 | 71.1 | 61.2 | 67.8 | 72.2 |
| Western Armenian | 58.2 | 55.6 | 65.6 | 71.0 | 59.0 | 74.8 | 67.7 | 72.4 | 75.5 |
| Bulgarian | 50.3 | 48.5 | 57.8 | 66.0 | 51.5 | 69.8 | 59.9 | 67.4 | 70.8 |
| Faroese | 54.1 | 50.5 | 59.5 | 67.5 | 52.9 | 71.3 | 61.2 | 68.6 | 72.0 |
| Welsh | 49.9 | 50.4 | 57.6 | 66.9 | 52.4 | 69.5 | 58.8 | 67.8 | 70.0 |
| Lithuanian | 49.7 | 48.8 | 57.9 | 66.4 | 51.4 | 69.5 | 59.4 | 67.8 | 70.5 |
| Romanian | 53.0 | 51.8 | 60.6 | 69.2 | 54.9 | 73.2 | 62.8 | 70.4 | 74.0 |
| Czech | 50.8 | 49.6 | 58.1 | 67.9 | 52.1 | 71.3 | 59.4 | 69.4 | 72.2 |
These results show that simple transformations have a large impact on the results obtained. Deleting accents alone results in an average gain of 7.2 accuracy points over the raw data across all 65 source languages, more than using the lexicon of open-class words (an average gain of 2.1 points); in fact, the latter resource has the lowest impact on the results. The closed-class (grammatical word) lexicon increases the accuracy score by an average of 14.1 points across all source languages, confirming the observations of Bernhard & Ligozat (2013). Finally, the best results are obtained by combining all the resources (ACO): an average accuracy of 61.0 (+18.7) for the 65 source languages, and 72.1 (+21.7) for the 10 languages presented in Table 5.
Comparing the data from Tables 3 and 5, we can see that, overall, as the percentage of transformed words increases, the average number of subwords per word decreases and the accuracy score increases. The decrease in the average number of subwords per word indicates, indirectly, that the data are closer to those used to pre-train the language model tokenizer and thus less “noisy”.
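The sketch below shows how this statistic can be computed with the XLM-R tokenizer; it is an illustration rather than the exact script used to produce Table 3.

```python
# Minimal sketch of the subword statistic reported in Table 3: the average number of
# XLM-R subwords per syntactic word, used here as a rough proxy for the distance between
# the (transformed) Alsatian data and the tokenizer's pre-training distribution.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def subwords_per_word(sentences):
    """sentences: list of lists of syntactic words."""
    n_words = n_subwords = 0
    for words in sentences:
        for word in words:
            n_subwords += len(tokenizer.tokenize(word))
            n_words += 1
    return n_subwords / n_words

# Raw vs. A-transformed version of the Table 4 example sentence
print(subwords_per_word([["Mìt", "dr", "Jugend", "ìsch", "nit", "loos", "!"]]))
print(subwords_per_word([["Mit", "dr", "Jugend", "isch", "nit", "loos", "!"]]))
```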
These results are lower than, but very close to, the accuracy score of 78% obtained by Blaschke et al. (2023) for the same Alsatian corpus, by manipulating the German corpus before training the POS tagging model. In our case, we did not re-train the models and used them as they were.
What is surprising, however, is that the source language which yields the best results is Western Armenian, which reaches a higher score than German, even after corpus pre-processing. Western Armenian is not included in the corpus of 100 languages used to pre-train XLM-R (Conneau et al. 2020). This was already observed by de Vries et al. (2022) for Swiss German. German is included and accounts for 10,297M tokens, making it the seventh most represented language after English (55,608M), Vietnamese (24,757M), Russian (23,408M), Indonesian (22,704M), Persian (13,259M) and Romanian (10,354M). It should be noted that Romanian obtains the second best result as a source language for tagging Alsatian. De Vries et al. (2022) had already noticed that Romanian was the best source language for any target language and language family in their experiments; this is confirmed again for Alsatian. The under-performance of German as a source language suggests that the model is less able to adapt to noise and that it could be somewhat “over-trained” for German. Languages such as Western Armenian or Faroese, which are not part of the pre-training corpus for XLM-R, could on the contrary produce models which are more flexible and less negatively influenced by noise.
Dealing with noise in extremely low-resource languages can take several forms. In this article, we have focused on noise due to different scripts and orthographic variants, as well as noise due to the (supposedly lower) transferability of models trained on unrelated languages in zero-shot cross-lingual settings. The results obtained in our experiments confirm previous observations about the decisive role of linguistic proximity and noise reduction techniques. The accuracy before and after fine-tuning on the Buryat UD corpus is relatively high for Turkish and Uyghur, which is not an obvious result. In fact, the linguistic discussion on the genetic affinity of Turkic and Mongolic languages still raises controversy, and this kind of experiment can certainly contribute to the discussion on how related these two language families are. However, some questions remain as to the interpretation of the capacities of multilingual models. Dagur and Alsatian are representative of different situations in that respect. Dagur and the other Mongolic languages are barely represented in the pre-training corpus of the XLM-R multilingual model, since only Mongolian is included. In this case, training the POS tagging model on a related language such as Buryat is important. On the other hand, several Germanic languages are represented in the XLM-R pre-training corpus: Afrikaans, Danish, German, English, Western Frisian, Dutch, etc. In this case, training on linguistically related languages seems to be more susceptible to noise, in contrast to more distant languages.
These conclusions should be verified for other languages and other tasks, to see if similar observations can be made in different cases. This could then contribute to a better understanding of the effects of noise stemming from corpora in extremely low-resource languages.