Acknowledgments. I am grateful to the anonymous referees for their comments, which prompted me to expand the initially concise presentation of the material. I thank Christopher Green and Dafydd Gibbon for their help with language editing of the paper.
- 1 This material was intended for ALDP 2017, the 8th 'African Languages in the Disciplines and Professions' (...)
The Nko alphabet was created in 1949 by the Guinean scholar and educator Solomana Kantɛ (1922–1987) as a script for Manding languages. It is the most successful new indigenous African writing system currently in use (Oyler 2005; Vydrine 2001). The script is mainly used for the Maninka language of Guinea.
Preliminary steps in compiling a corpus of Maninka, as well as a description of the Maninka Reference Corpus (Corpus Maninka de Référence, CMR), in which the vast majority (over 85 percent) of texts are written in Nko, have been presented in a series of papers (Davydov 2010; Vydrin 2013; Vydrin 2014; Vydrin, Rovenchak & Maslinsky 2016).
The present report briefly summarizes the experience of building the Nko corpus and outlines prospects for its further development.
We started collecting texts for the corpus of Nko in 2009. A number of Nko texts in electronic form are available from websites (mainly http://www.kanjamadi.com) and social networks. The latter make up a modest fraction of all the texts in the corpus, but such texts (Twitter and Facebook posts) are rather specific and, consequently, have been the focus of recent studies (see Mikros & Perifanos 2015 and references therein). At present, more than 1,000 tweets in Nko can be collected. The authors tweeting mostly in Nko include @solofarabado (about 500 tweets), @kiniebaka (over 300 tweets), and @siarabohlansine (over 100 tweets). Other online media, for instance the NkoAfrica blog (http://nkoafrica.over-blog.com) launched in 2017, are a good source for potential corpus expansion. Several pages from the Wikipedia incubator in Nko (https://incubator.wikimedia.org/wiki/Special:PrefixIndex/Wp/nqo/) might also be included. A large part of the Nko texts in electronic form (in DOC and PDF formats) was obtained from the Nko Academy (Ńkó Dúnbu, http://nkodoumbou.com) via Ibrahima Sory 2 Condé. All the files were converted to plain‑text format in the standard UTF‑8 encoding using software specially developed by the author.
As of April 14, 2018 (the day marking the 69th anniversary of Nko), the corpus size is 3,122,178 words (Vydrin et al. 2014). Over 20 percent of the corpus (ca. 650,000 running words) is covered by texts from periodicals,2 in particular Dàlu Kɛ́ndɛ and Yèreya fɔ̀ɔbɛ (cf. Rovenchak 2015a). Religious texts (mostly the Qur’an and Tafsīr) account for over 500,000 words. The remaining part of the corpus is composed of educational literature, texts of a popular nature, works of fiction, etc. The corpus can be freely accessed online at http://cormand.huma-num.fr/cormani/.
The conversion of texts from PDF requires several steps. First of all, the text is copied from the PDF into a text file in the proper order, which is particularly important for multicolumn periodicals, and break points are inserted for the subsequent splitting of longer texts into individual articles or sections. This last step justifies the manual processing of PDFs: article titles are often given as WordArt objects or similar images rather than as ordinary text and thus require re‑typing. At present, we have processed over 100 issues of periodicals, which constitute about one third of the available files. The resulting text files are then converted into the UTF‑8 format. Various encoding schemes are attested in the original PDFs; approximately ten different schemes have been encountered so far. Occasionally, two different encodings appear in one PDF for different text blocks.
Some of the most typical encoding schemes are summarized below. For internal purposes, they are denoted by abbreviations (NN, NN1, NN3, etc.). The scheme NN3 is attested, in particular, for Dàlu Kɛ́ndɛ from №15 (15 August 2011) to №25 (31 October 2011). Figure 1 compares text copied from the PDF (left image) with the converted text (right image).
Figure 1: Screenshots of texts from Dàlu Kɛ́ndɛ №16 (22 August 2011).
Note the technical headers within <h>…</h> tags, used for the automatic splitting of entire issues into separate articles. The left panel shows the original copied text, while the right shows the text in a Unicode font.
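As an illustration only, a minimal Perl sketch of such splitting is given below. It assumes one converted UTF‑8 plain‑text file per issue with headers of the exact form <h>Title</h>; the output file naming is invented for the example and is not the scheme used for the corpus.

#!/usr/bin/perl
# A sketch of splitting a converted issue into separate articles at <h>...</h>
# headers. The input layout and the output file names are illustrative only.
use strict;
use warnings;
use open qw(:std :utf8);

my $issue = shift @ARGV or die "Usage: $0 issue.txt\n";
open my $in, '<', $issue or die "Cannot open $issue: $!";

my ($n, $out) = (0, undef);
while (my $line = <$in>) {
    if ($line =~ m{<h>(.*?)</h>}) {          # a header starts a new article
        my $title = $1;
        close $out if $out;
        open $out, '>', sprintf('%s.art%03d.txt', $issue, ++$n)
            or die "Cannot write article $n: $!";
        print {$out} "$title\n";             # keep the title as the first line
    }
    elsif ($out) {
        print {$out} $line;                  # body lines go to the current article
    }
}
close $out if $out;
print "Split $issue into $n articles\n";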
A screenshot of a fragment of the Perl script used for the conversion to Unicode is given in Fig. 2.
Figure 2: A fragment of the Perl script for the conversion to the Unicode format from the NN3 encoding.
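The actual substitution table is the one shown in Figure 2; the fragment below merely illustrates the general approach taken by such scripts, namely a character‑by‑character substitution from a legacy encoding to Nko Unicode codepoints. The input codepage and all mapping entries here are placeholders, not the real NN3 table.

#!/usr/bin/perl
# Illustration of the conversion approach only: map legacy characters to Nko
# Unicode codepoints. The codepage and the entries below are placeholders.
use strict;
use warnings;
binmode STDIN,  ':encoding(cp1252)';   # assumption: legacy text in a single-byte codepage
binmode STDOUT, ':encoding(UTF-8)';

my %map = (
    'A' => "\x{07CA}",   # hypothetical: legacy 'A' -> NKO LETTER A
    'B' => "\x{07D3}",   # hypothetical: legacy 'B' -> NKO LETTER BA
    'K' => "\x{07DE}",   # hypothetical: legacy 'K' -> NKO LETTER KA
);

while (my $line = <STDIN>) {
    # replace every character that has an entry in the substitution table
    $line =~ s/(.)/exists $map{$1} ? $map{$1} : $1/ge;
    print $line;
}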
Another encoding scheme (NN) is demonstrated in Fig. 3. Note that system fonts do not contain glyphs for the codepoints of the text copied from the PDF, so only empty boxes are visible. A fragment of the conversion script is shown in Fig. 4. This scheme is attested, in particular, for Dàlu Kɛ́ndɛ from №96 (20 May 2013) to №99 (10 June 2013).
Figure 3: Screenshots of texts from Dàlu Kɛ́ndɛ №96 (20 May 2013).
The left panel shows the original copied text, while the right shows the text in a Unicode font.
Figure 4: A fragment of the Perl script for the conversion to the Unicode format from the NN encoding.
The left panel corresponds to the UTF‑8 mode, while the right shows the ANSI mode of text representation. Note the unusual encoding of the space character in the top row.
While processing the files, several typical misprints were discovered. They often do not cause any problems in reading and understanding the printed version, but they hinder straightforward automatic processing of texts.
For instance, the dágbasinna symbol < ߑ > (U+07D1), which cancels the gbàrali (contraction) orthographic rule, is often replaced by the low tone apostrophe < ߵ > (U+07F5), as in the following example (the word ʒándarmun):
[…] nà dá. kɛ̀lɛ‑mɔ́ɔ` (ʒándarmun) lú lè fɔ́lɔ […]
[…] come PFV.INTR war‑human‑ART (FRGN) PL FOC first […]

nà dá. fárin kúlod píivi […]
come PFV.INTR vigorous Claude Pivi […]3

- 3 I am grateful to Valentin Vydrin for assistance with this example glossing. PFV.INTR corresponds to (...)
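Such substitutions can at least be flagged automatically for manual review. The following snippet is a rough heuristic sketch, not part of the corpus pipeline: it reports tokens in which the low tone apostrophe U+07F5 stands between two Nko consonant letters, a position where the dágbasinna U+07D1 would normally be expected; the consonant range assumed here, U+07D2–U+07EA, may need refining.

#!/usr/bin/perl
# Heuristic sketch: flag tokens where the low tone apostrophe (U+07F5) occurs
# between two Nko consonant letters, i.e. where dagbasinna (U+07D1) is
# probably intended. The consonant range U+07D2-U+07EA is an assumption.
use strict;
use warnings;
use open qw(:std :utf8);

my $cons = qr/[\x{07D2}-\x{07EA}]/;

while (my $line = <>) {
    for my $word (split /\s+/, $line) {
        print "suspicious ($.): $word\n" if $word =~ /$cons\x{07F5}$cons/;
    }
}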
Note that the corpus is glossed using the Daba software (Maslinsky 2014) based on the Malidaba dictionary (Vydrin, Rovenchak & Maslinsky 2016). The preparation of a cleaned, disambiguated, and thoroughly checked Nko corpus is a subsequent stage, which would require resolving some of the issues mentioned below.
Spurious extra spaces are not rare in combination with tonal apostrophes and tonal marks. Moreover, in many cases, the placement of tonal marks (before or after a vowel, as they are stored in the PDFs) varies within the same file. Fully automatic standardization of such cases is not possible due to the appearance of foreign sounds that are rendered with diacritical marks, cf. <ߊߓߑߘߎߟߊ߯ߤ߭ߌ߬> ábdulaaħì ‘Abdulaaħi’ or <ߛߊߙߑߞߏߖ߭ߌ߫> sárkozi ‘Sarkozy’. In some cases, it is not immediately obvious whether a mark belongs to the consonant or to the neighboring vowel, cf. <ߌߙߊߞߌ߫> íraki ‘Iraq’, where the overline denotes the high tone of the final syllable, versus <ߌߙߊߞ߫ߌ> íraqí` ‘Iraq’, where the overline changes < ߞ > /k/ to /q/. These problems are often observed in periodicals, where foreign names occur rather frequently. Manual correction seems to be the best option so far. A Nko spellchecker, which would facilitate handling of the abovementioned misprints, is under development by Jean Jacques Méric (cf. Méric 2014, describing the Bamana spellchecker).
Another curious and unexpected typographic solution is the “nasalized” Arabic comma (the Arabic comma < ، >, U+060C, followed by the combining nasalization mark, a dot below, U+07F2), occurring instead of the Arabic semicolon < ؛ > (U+061B). Such cases are easily handled automatically.
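For instance, a single substitution of the following kind would suffice (shown here as a standalone Perl one‑liner rather than the script actually used for the corpus):

# replace Arabic comma + Nko nasalization mark with the Arabic semicolon
perl -CSD -pe 's/\x{060C}\x{07F2}/\x{061B}/g' input.txt > output.txt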
In February 2016, an online converter between various Nko encodings, previously identified while processing DOC files for the corpus, was launched at http://nkoconvert.ho.ua. Initially, it was intended as a tool for conversion between older (pre‑Unicode) fonts and the Unicode format. Two different mappings of Nko letters onto Arabic letters were used in those fonts: one is represented by the typefaces BOURAMA‑KANTE, Nko Manding1 Cote D'Ivoiredf1, N'ko2 Manden‑1, and Solomana Kante, while the second is found in A Manding BATEKA, A Manding IT BAMA, Fofona, Karifala Berete, Manding N'ko Sigui, Nko Africa, Nko Kouroukan Fouwa, and some others (Vydrin, Rovenchak & Maslinsky 2016). A mapping onto another right‑to‑left script, Hebrew, is used in the NkoHeb typeface, which is rarely attested in the available materials.
Other functions of this converter include transformation between the Nko script and the Roman‑based orthography (with full notation of tones) and transformation from the Roman‑based orthography (without tones) to Nko. There are certain limitations in the latter conversion. Since the original Roman‑based texts have no tonal notation, the Nko version of the respective text cannot be tonalized. Short vowels receive no tonal marks, i.e., they are represented as if they carried a high tone. Taking into account frequency data on tonal patterns in Nko texts (Rovenchak 2011; Rovenchak 2015b), one can estimate that about 70% of vowels are thus represented correctly. A similar decision was made to mark long vowels with the diacritic corresponding to “long vowel + high tone”. The only exception is the long vowel /ɔɔ/, which much more often carries a low tone in Maninka. Since there is no tonal distinction, the gbàrali (orthographic vowel contraction) rule is not applied in the Nko version of texts converted from the Roman‑based orthography. In the future, more advanced techniques might be applied, similar to those used by Liu & Nouvel (2017) for Bamana texts.
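To make the tonal conventions described above concrete, a deliberately partial sketch is given below. It covers only a handful of letters, does not apply gbàrali, and is not the code behind the online converter; the codepoint assignments follow my reading of the Unicode Nko block and should be double‑checked.

#!/usr/bin/perl
# Partial, illustrative Roman -> Nko transliterator (lowercase input assumed).
# Conventions as described above: short vowels are left without tone marks
# (default high tone); long (doubled) vowels get the "long high tone" mark
# U+07EF, except ɔɔ, which gets the "long low tone" mark U+07F0.
# Only a few letters are covered; this is not the online converter's code.
use strict;
use warnings;
use utf8;
use open qw(:std :utf8);

my %vow = (
    'a' => "\x{07CA}", 'e' => "\x{07CB}", 'i' => "\x{07CC}",
    'ɛ' => "\x{07CD}", 'u' => "\x{07CE}", 'o' => "\x{07CF}", 'ɔ' => "\x{07D0}",
);
my %cons = (
    'b' => "\x{07D3}", 'd' => "\x{07D8}", 'f' => "\x{07DD}", 'j' => "\x{07D6}",
    'k' => "\x{07DE}", 'l' => "\x{07DF}", 'm' => "\x{07E1}", 'n' => "\x{07E3}",
    's' => "\x{07DB}", 't' => "\x{07D5}", 'w' => "\x{07E5}", 'y' => "\x{07E6}",
);

while (my $line = <>) {
    # long vowels first: ɔɔ -> vowel + long low tone, others -> vowel + long high tone
    $line =~ s/ɔɔ/$vow{'ɔ'}\x{07F0}/g;
    $line =~ s/([aeiuo]|ɛ)\1/$vow{$1}\x{07EF}/g;
    # then short vowels (no tone mark) and the covered consonants
    $line =~ s/([aeiuoɛɔ])/$vow{$1}/g;
    $line =~ s/([bdfjklmnstwy])/$cons{$1}/g;
    print $line;
}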
A number of open issues remain whose resolution would facilitate further acquisition of texts for the Nko corpus. They differ in nature and importance, and only a few of them are outlined below.
Some minor issues concern the possibility of encoding additional characters in the Nko Unicode block (Everson 2015): the tɛ́‑kɛrɛndɛ mark, the combining dántayalan mark, and the dɔ́rɔmɛ and táman currency signs.
- 4 In Unicode Version 11.0, released on 05 June 2018, the following codepoints are assigned: U+07FD (...)
Of the four proposed symbols, the tɛ́‑kɛrɛndɛ mark is needed most frequently. Its role is similar to that of a hyphen in some compound words; at present, several different approaches are used to represent this symbol. The combining dántayalan mark, denoting abbreviations of units of measure, also occurs in the corpus several times. The remaining two signs, which are currency symbols, have not been attested in the corpus so far. The Unicode Technical Committee suggested that the tɛ́‑kɛrɛndɛ mark be represented by a hyphen (Anderson et al. 2016). A further detailed study of Nko texts is required to determine whether there is a crucial difference between the tɛ́‑kɛrɛndɛ and the hyphen. For the time being, this mark can be represented as an underscore < _ > (U+005F). The other three characters (dántayalan, dɔ́rɔmɛ, and táman) are expected to appear in Unicode Version 11.0, scheduled4 for release in June 2018.
The currently available, though incomplete, conversion from the Roman‑based orthography to Nko requires further tuning to reflect tonal marking in Nko texts. Such an algorithm will rely heavily on a dictionary, which is under development (Vydrin, Rovenchak & Maslinsky 2016).
In PDF documents, especially those produced using modern Unicode fonts, problems with text block placement occur frequently: even a single text line can appear as a set of non‑consecutive blocks. Unexpected as it may be, the problem could originate in the PDF algorithms for representing right‑to‑left scripts. Therefore, to facilitate further collection of texts, it would be practical if the sources of published texts (in DOC, DOCX, ODT, or TXT formats) were made available by authors or publishers, rather than only the final PDFs. Obviously, no such text will be put online in its entirety; the files would be used solely for processing towards inclusion in the corpus.
In the future, it would be useful to implement optical character recognition (OCR) for Nko, as such software is not currently available. One possibility is to train the open‑source OCR engine Tesseract (https://github.com/tesseract-ocr) on the Nko script. OCR software would allow processing both of printed materials not available electronically and of PDF documents lacking the text layers required by the current conversion algorithms.
I expect the presented information to generate feedback from the Nko community and hope that it plants the seeds for further collaboration in collecting texts for the Nko corpus as well as in improving the existing tools for processing Nko texts. Separate attention should be paid to texts available in several languages as a source for multilingual parallel corpora; these will be invaluable for the future development of automatic translation services.