Since Google began building the enormous Google books digital library and, especially, since it made the first version of the Google n-grams frequency lists (henceforth “GNs”), based on the Google books collection, available in 2009, this impressive amount of data has attracted many researchers. In particular, the collection of English n-grams has been used as a valuable source of information regarding not only the history and evolution of different cultural phenomena (Pechenick et al. 2015), but also various characteristics of natural language, including the quantitative and qualitative characteristics of the lexicon (for example, its size and diachronic growth in Gerlach and Altmann 2013) or the evolution of verbal inflection (cf. Michel et al. 2011). Nonetheless, GNs seem to be surprisingly little used for the diachronic analysis of languages other than English, both in inflectional morphology and in word-formation, except for the illustration of competing frequency curves of isolated words through the integrated N-gram viewer tool. The few previous studies conducted on Romance languages have shown that GNs can provide an incomparably larger amount of data than previous research has been able to handle, whether in lexical semantics (e.g., the detection of semantic changes in Italian words proposed in Basile et al. 2016) or in word-formation (e.g., the diachronic analysis of Italian deverbal nouns by Radimský and Štichauer 2021). However, it turns out that an efficient morphological exploitation of GNs requires tedious computerized pre-processing of the raw data, the quantity of which exceeds the operational capacities of desktop computers. This article thus presents the first outcomes of a project that aims to analyze the third (and latest) version of the Italian and French GNs (2020) and, starting from the raw data, to build open-access datasets of electronic dictionaries that would facilitate the use of GNs for diachronic research, particularly in lexical morphology.
- 1 https://books.google.com/ngrams
- 2 https://books.google.com/
In short, Google n-grams (GNs)1 are sets of frequency lists published by Google and based on data from the Google books (GB) repository.2 Although Google’s reticence to disclose information about the coverage of GB is notorious (cf. Fagan 2021), it would be impossible to use GNs in linguistic research without at least some idea of the size and representativeness of the underlying data. Section 2 will therefore first discuss the coverage of the Italian and French Google books collections.
GN data are made accessible by Google in two ways: either through the Google books n-gram viewer application3 or in the form of raw data that contain n-grams (word forms) with their frequencies in the respective time spans. Despite its versatility, the N-gram viewer has two serious limitations: first, it does not allow searching for fragments of word forms, and second, it provides only graphs as outputs – so that, for instance, it is not possible to extract a list of words that (could) bear a specific affix. It is therefore only useful for visualizing the diachronic competition between forms that are known in advance. For other uses in lexical morphology, the raw data must be used. Raw Italian and French GN data and their basic pre-processing will be introduced in Sections 3 and 4, respectively. Section 5 will then provide a quantitative overview of Italian and French unique types gathered from Google unigrams.
Section 6 will discuss further possibilities of using Romance GNs specifically in lexical morphology and introduce the structure of new datasets based on original unigram, bigram, and trigram GN data.
Compared to synchronic corpora, diachronic corpora suffer from two major problems: a low diversity of genres and a limited size. The first limitation can hardly be overcome, since the further back in time we go, the narrower the variety of genres that has been preserved to this day. That is why most diachronic corpora rely entirely or mostly on (published) books, including both fiction and non-fiction in variable proportions. In this respect, the GB collection seems perfectly suitable for the diachronic study of lexical morphology. Conversely, the limited size of diachronic corpora is partly due to technical barriers arising from difficulties with gathering and converting texts, and these can be overcome. More specifically, synchronic corpora of contemporary language reached hundreds of millions of tokens as early as their “third” generation in the 1990s (Leech 1991), especially with the British National Corpus (BNC, 100 million tokens), which is still considered today a gold standard that ensures a good balance between size and quality in both the selection and the treatment of texts. Even these days, however, corpora of older language stages are usually one or two orders of magnitude smaller. As far as Italian is concerned, the diachronic Corpus OVI dell’italiano antico, which contains texts from the 10th-14th centuries, comprises 30 million words, and its lemmatized section, TLIO, has 23 million words.4 The diachronic corpus MIDIA, which spans as much as eight centuries (sic!), comprises only 7.5 million words divided into five diachronic layers (D’Achille & Grossmann 2017), and a new corpus, CODIT, with a similar architecture and diachronic coverage, comprises 33 million tokens (Micheli 2021) – which amounts to between 4.5 and 7.5 million tokens per diachronic layer. This makes rarer phenomena very hard or even impossible to analyse. In this respect, the GB repository contains data that are several orders of magnitude larger.
Although the exact coverage of the GB repository and the criteria for selecting books remain unknown, its size in absolute figures is available, and some approximate relative figures can be inferred from the sources published so far. The size of the GB corpora underlying each GN dataset is provided directly by Google in “total_counts” files.5 Table 1 reproduces the overall token frequencies and the number of volumes (i.e., books) for the 2nd and 3rd versions of the Italian and French data, made available by Google in 2012 and 2020, respectively. These figures are impressive even beyond the usual standards of diachronic corpora: the corpus underlying the latest version of the French GNs is 3,288 times larger than the BNC.
Table 1. Size of the GB corpus underlying the respective GN datasets
GN dataset     | Volumes (books) | Tokens
Italian (2012) | 305,763         | 40,288,809,936
Italian (2020) | 1,209,932       | 120,410,089,963
French (2012)  | 792,118         | 102,174,681,393
French (2020)  | 3,165,024       | 328,796,168,553
Another strong point of the GN data is the granularity of diachronic dating, as the frequency of each n-gram is tied to the exact year of publication of the respective source books. GN data may thus be divided into various diachronic layers spanning from the 1470s to the present day. The “total_counts” files provide the exact size of the underlying corpus for each year, which makes it possible to convert absolute frequencies into relative ones individually for each n-gram (in fact, only relative frequencies allow frequency changes to be compared across time periods). The size of the different subcorpora from the 2nd and 3rd versions of the French and Italian GNs, expressed in absolute figures grouped by decades, is provided in Appendix I. The column “increase” indicates how many times larger the 2020 subcorpus for each decade is than its 2012 counterpart. As for the absolute size of the 3rd version, the subcorpus for each decade from the 1540s on comprises more than 2 million tokens, and from the 1800s on, the order of magnitude is billions of tokens per decade. The comparison between the last two GN versions shows that, in terms of updates, the increase in subcorpus size for the second half of the 20th century was relatively small (at most 2.62-fold per decade for French, even less for Italian), but the further back in time we go, the larger the updates become, reaching isolated peaks in 1560-1569 for French (a 2,063-fold increase) and 1580-1589 for Italian (a 364-fold increase).
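In practice, the conversion is a simple division of each n-gram’s yearly match count by the corpus size reported for that year in “total_counts”. Here is a minimal sketch in Python, assuming both counts have already been parsed into year-indexed dictionaries; the function name and the per-million scaling are illustrative choices, not part of the GN distribution:

```python
from collections import defaultdict

def relative_freq_by_decade(ngram_counts, total_counts):
    """Convert the absolute yearly counts of one n-gram into relative
    frequencies (per million tokens), aggregated by decade.

    ngram_counts: dict year -> match_count for the n-gram
    total_counts: dict year -> corpus size in tokens (from "total_counts")
    """
    ngram_dec, corpus_dec = defaultdict(int), defaultdict(int)
    for year, count in ngram_counts.items():
        ngram_dec[year // 10 * 10] += count      # e.g. 1957 -> 1950
    for year, size in total_counts.items():
        corpus_dec[year // 10 * 10] += size
    return {dec: ngram_dec[dec] / corpus_dec[dec] * 1_000_000
            for dec in sorted(ngram_dec) if corpus_dec[dec] > 0}
```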
In relative terms, the first version of GNs, published in 2009, was based on a version of GB that contained approximately 4% of all books ever printed (Michel et al. 2011), while the second version, released in 2012, was already based on data from 6% of all books ever published (Lin et al. 2012). To the best of my knowledge, no such relative figures for the third version of GNs have been made publicly available. However, presuming that the 6% rate also holds for the Italian and French GNs, the data from Table 1 and Appendix I show that, once the figures for the newest period (from 2010 on) are discounted from the 3rd GN version, the GB size underlying the 3rd version increased approx. 2.6 times for Italian and 2.8 times for French with respect to the 2nd version, so that the 3rd version could be based on data from approximately 16% of the French and Italian books ever printed before 2010.
- 6 E.g., the volume A new set of exercises consisting of a collection of entertaining histories, anecd (...)
Conversely, a weak point of the GB corpus, as of other large automatically compiled corpora, is the accuracy of text selection. Although we can be highly confident that the GB texts come exclusively from printed books, it is less certain whether these texts were actually written in the language that corresponds to the selected dataset. In the Italian unigrams, for example, there are 2,669,959 occurrences of the form are, for which a reverse search in Italian Google books reveals that the source is mostly English texts (albeit with some relation to Italian).6 Similarly, for the 4,538,112 occurrences of the form im, the source text is likely to be wholly or partially in German. The exact proportion of texts in a misidentified language cannot be determined; it can only be estimated by comparing the number of occurrences of forms with a similar function. For example, the ratio of the occurrences of the (predominantly English) verbal forms am (2,526,106 tokens) and are (2,669,959 tokens) to those of their direct Italian equivalent sono (1.sg. and 3.pl.ind.pres. ‘be’) is 5,196,065 to 537,474,395, which implies that the proportion of English-language texts in the Italian GNs could be less than 0.96%. The same comparison between the token frequencies of are (4,935,927) and sont (1,499,397,497) in the French data yields a rate of 0.33%. The proportion of data in the wrong language in GB is therefore not high, but it should be taken into account in analyses. Other known issues of the GB corpus include the quality of text processing (problems related to tokenization and OCR (optical character recognition), discussed in Section 3), the unavailability of the underlying text data (which can only be displayed by reverse manual search in GB), and, to some extent, the limited reliability of dating (only the date of publication of the text is available, not the date of writing, which is potentially a problem with reissues of older books).
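The estimate itself is plain ratio arithmetic; the following lines merely reproduce the figures quoted above:

```python
# Share of the (predominantly English) forms am + are relative to
# their Italian counterpart sono in the Italian unigrams:
eng = 2_526_106 + 2_669_959               # am + are
ita = 537_474_395                         # sono
print(f"{eng / (eng + ita):.2%}")         # -> 0.96%

# The same comparison for French (are vs. sont):
are, sont = 4_935_927, 1_499_397_497
print(f"{are / (are + sont):.2%}")        # -> 0.33%
```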
To date, Google has published three versions of n-gram datasets from Google books: in July 2009 (Version 1), July 2012 (Version 2), and February 2020 (Version 3). Only n-grams that appear over 40 times across the corpus are included in the data, but given the impressive size of the corpus, this does not seem to pose a problem: rarer word forms are likely to be proper nouns or forms recognized incorrectly during the OCR procedure.
The format of the 2nd version data is exemplified in (1a), which can be read, according to the explanatory note (1b) published by Google,7 as follows: the word form fondamento, written entirely in lower case and tagged as Noun, appeared 8,961 times in 760 distinct Italian books published in 1950. In the 3rd GN version, the data format changed slightly to avoid the redundant repetition of each n-gram form for each year, as exemplified in (2). Although no explanatory notes are available for this 3rd version yet, it is reasonable to assume that the type of data is the same as in Version 2. Thus, besides the word form (in case-sensitive format), the tag, and the absolute frequency of each form in each year, we also have basic information on dispersion, expressed as the number of volumes in which the n-gram appeared.8 Conversely, notice that GN data are not lemmatized in any way. In addition to the unigrams exemplified in (2), the raw data also contain multigrams (from 2-grams up to 5-grams) in a very similar format: the respective unigrams within the multigram are separated by a space.
(1a) fondamento_NOUN 1950 8961 760
(1b) unigram TAB year TAB match_count TAB volume_count EOL
(2) Fondamento_NOUN 1535,3,3 1556,14,1 1569,1,1 1576,2,1 [...]
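To illustrate, a 3rd-version line such as (2) can be unpacked into the n-gram form and its per-year counts along the lines of the following sketch; splitting on arbitrary whitespace is an assumption meant to cover both the tab separators of (1b) and the spaces shown in (2):

```python
def parse_v3_line(line):
    """Parse a 3rd-version GN line such as
    'Fondamento_NOUN 1535,3,3 1556,14,1' (cf. example (2))."""
    fields = line.split()                      # whitespace-separated fields
    form = fields[0]                           # n-gram form, possibly with _TAG
    years = {}
    for field in fields[1:]:                   # 'year,match_count,volume_count'
        year, match_count, volume_count = field.split(",")
        years[int(year)] = (int(match_count), int(volume_count))
    return form, years

form, years = parse_v3_line("Fondamento_NOUN 1535,3,3 1556,14,1")
# form == 'Fondamento_NOUN'; years[1535] == (3, 3)
```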
While the information about years and frequencies in GNs is straightforward, a word is in order about some issues concerning the form of n-grams, namely about OCR-related errors, segmentation of input text into n-grams, and tagging.
I will comment on these issues in the following sections specifically with regard to the 3rd version datasets of Italian and French unigrams. The total number of Italian and French unigrams in the raw GN (2020) dataset is given in Table 2. Unigrams whose form contains non-alphabetic characters (other than an apostrophe or the “_” sign that precedes the tag) were filtered out and will not be analysed further.
Table 2. Number of Italian and French unigrams in GN (2020)
                        | Italian    | Rate   | French     | Rate
Alphabetic unigrams     | 11,077,935 | 89.03% | 20,172,028 | 85.04%
Non-alphabetic unigrams | 1,365,552  | 10.97% | 3,547,323  | 14.96%
Unigrams (total)        | 12,443,487 |        | 23,719,351 |
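The alphabetic filter described above can be sketched in Python as follows; the exact character class used in the project is not reproduced here, so the regular expression below (Unicode letters plus straight or typographic apostrophes, after stripping an optional _TAG suffix) should be read as an approximation:

```python
import re

TAG_RE = re.compile(r"_[A-Z]+$")          # trailing _NOUN, _VERB, ...
LETTERS_RE = re.compile(r"^[^\W\d_'’]+(?:['’][^\W\d_'’]*)*$")  # letters joined by apostrophes

def is_alphabetic(unigram):
    """True if the unigram contains only letters and apostrophes
    once the optional POS tag has been stripped."""
    form = TAG_RE.sub("", unigram)
    return bool(LETTERS_RE.match(form))

assert is_alphabetic("fondamento_NOUN")
assert is_alphabetic("sull’esempio")
assert not is_alphabetic("3,14")
```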
- 9 The correct form has 71,187,461 occurrences, while the form *govemo appears 197,111 times according (...)
- 10 Cf. Radimský & Štichauer (2021).
As Solovyev et al. (2020: 148-149) state, OCR quality was a serious issue especially in the first version of the GNs, while the second version (2012) already brought a significant improvement. The most significant systematic error is probably the substitution of m for rn, which affects words such as Italian governo (‘government’): for this particular word, the ratio of incorrect forms is 0.28%;9 other substitutions, such as the digit 1 for either uppercase I or lowercase l (*1talia, *u1timo), have a negligible token frequency. Some spelling errors also stem from incorrect word division (e.g., the forms *bare and *bamento sometimes result from an incorrect analysis of tur-bare and tur-bamento).10 Other frequent problems, such as the v~u (?vna for una), f~s (?fua for sua) or j~i (?ajuto for aiuto) substitutions, cannot generally be regarded as mere errors, since they reflect, to some extent, historical typography or spelling, so preserving them may be useful for research.
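The error rate for a given word can be estimated directly from the frequency lists by comparing the correct form with its m-for-rn variant; a sketch, assuming a dictionary freq of total token frequencies built from the unigram data:

```python
def rn_ocr_error_rate(word, freq):
    """Estimated share of m-for-rn OCR errors for a word containing 'rn',
    e.g. governo vs. *govemo."""
    corrupted = word.replace("rn", "m")
    correct_f, corrupted_f = freq.get(word, 0), freq.get(corrupted, 0)
    total = correct_f + corrupted_f
    return corrupted_f / total if total else 0.0

freq = {"governo": 71_187_461, "govemo": 197_111}   # figures from note 9
print(f"{rn_ocr_error_rate('governo', freq):.2%}")  # -> 0.28%
```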
Regarding how Google performed the tokenization (i.e., the segmentation of the input text into n-grams), data analysis shows that the major problem is the unsystematic treatment of certain determiners that are separated from the immediately following word by an apostrophe. In other words, when a word beginning with a vowel is preceded by one of certain definite determiners, as in Italian l’esempio, dell’esempio or all’esempio, the whole combination may have been segmented either as a bigram or as a unigram in the original data. As Table 3 shows, segmentation into bigrams prevails with the definite article l’, while the overwhelming majority of occurrences with the forms all’ and dell’, which correspond to the fusion of the prepositions a and di with the definite article, are classified as unigrams. The last row of Table 3 gives the sum over all potential determiners of this type combined with the word form esempio: 19% of the occurrences are included in unigrams and the rest in bigrams.
Table 3. Sum of token frequencies of the word esempio (‘example’) preceded by selected determiners in original GN data
Determiner | Unigrams  | Bigrams   | Total      | Unigrams (rate)
l’         | 410       | 8,822,695 | 8,823,105  | 0.0046%
all’       | 329,641   | 5,712     | 335,353    | 98.2967%
dell’      | 283,377   | 8,534     | 291,911    | 97.0765%
*l’        | 2,347,005 | 9,988,301 | 12,335,306 | 19.0267%
Some n-grams in the GNs (but by no means all) have a POS tag assigned, drawn from the Google Universal POS tagset (Petrov et al. 2011). The different tags and the corresponding number of unigrams in the Italian and French data are shown in Table 4. The number of unigrams here technically represents the number of lines in the original GN version 3 data (unigrams with anomalous tags and those that contain non-alphabetic symbols other than the apostrophe are ignored).
Table 4. Tags and the number of respective unigrams in Italian and French GNs (2020)
- 11 As expected, the most frequent forms with the ADP (“adpositions”) tag are prepositions, such as di, (...)
       | Italian            |        | French             |
Tag    | Number of unigrams | Rate   | Number of unigrams | Rate
ADJ    | 1,003,866          | 9.06%  | 1,561,903          | 7.75%
ADP11  | 195,769            | 1.77%  | 59,918             | 0.30%
ADV    | 165,406            | 1.49%  | 322,214            | 1.60%
CONJ   | 27,357             | 0.25%  | 12,757             | 0.06%
DET    | 17,335             | 0.16%  | 191,558            | 0.95%
NOUN   | 3,613,274          | 32.62% | 6,021,471          | 29.88%
PRON   | 15,103             | 0.14%  | 69,715             | 0.35%
VERB   | 1,048,543          | 9.47%  | 1,469,722          | 7.29%
X      | 25,452             | 0.23%  | 1,331,852          | 6.61%
NUM    | 20,828             | 0.19%  | 0                  | 0.00%
PRT    | 0                  |        | 58,838             | 0.29%
no tag | 4,942,621          | 44.63% | 9,052,049          | 44.92%
Total  | 11,075,554         |        | 20,151,997         |
About 44% of the unigrams in both languages bear no tag. An interesting fact is the substantial difference in the number of unigrams with the X tag, which represents an indeterminate POS (foreign words, typos, abbreviations). Indeed, the four most frequent unigrams with the X tag in Italian are p, ecc, m, and cm, which are genuine Italian abbreviations, while in the French dataset the four most frequent unigrams with the X tag are the English words in, of, the, and and. This seems to be due more to the tagging technique than to a different proportion of foreign data in the two datasets: as we saw in Section 2, the proportion of English texts seems to be rather low in the French data. Note, moreover, that the form the is most often tagged, wrongly, as NOUN in the Italian data, while in the French data it most often bears the correct tag X. Another difference between the two languages is that the French dataset does not use the NUM tag (which marks numeric expressions in the Italian data), while the Italian dataset does not use the PRT tag (designed for “particles or other function words”), which in French presumably marks various morphemes or words that frequently appear in any position within strings containing a hyphen. A list of the most frequent French forms with the PRT tag is given in (3).
(3) ex, non, vice, soi, semi, vis, anti, quasi, pro, Vice, ultra, co, micro, extra
At the lower positions of the frequency list of forms with the PRT tag, there are also fragments of proper nouns, such as Saint and Sulpice (Saint-Sulpice being a Roman Catholic church in Paris).
The above analysis of the raw data shows that a “unigram” in the GNs is technically something rather different from a “word form”. Each word form in the GNs is represented by a variety of different unigrams, depending on whether it is written in upper or lower case, what tag it bears (if any), and whether it is segmented correctly. For instance, the Italian word form esempio (‘example’) is found in 255 (sic!) different unigrams, the most frequent of which are listed, separated by commas, in (4). This explains, in part, why the total number of unigrams shown in Table 2 is so high.
(4) esempio, esempio_NOUN, Esempio, Esempio_NOUN, sull’esempio, all’esempio, dall’esempio, nell’esempio, sull’esempio_ADV, dell’esempio, coll’esempio, nell’esempio_ADV, all’esempio_ADV, dall’esempio_ADP, bell’esempio, dall’esempio_ADV, […]
A further morphological analysis of n-grams therefore presupposes, first, the compilation of a list of unique types (word forms) with all the related properties that can be extracted from the raw GN data. To achieve this goal, the raw data were first pre-processed in Python in the way exemplified in Table 5. The tag (Column C) and the determiner (Column F), if present, were separated from the original n-gram form (Column A), and the form was converted to lower case. The original information on the use of upper and lower case was encoded in Columns G and H, which indicate, respectively, whether the word form (in B) contains at least one uppercase letter and whether its first letter is uppercase (“U”) or lowercase (“L”). Column I shows whether the form in B contains an apostrophe (if “Y”es, then Column F indicates the segment preceding the apostrophe, i.e., the determiner). Column J reports the frequency sum of the n-gram over all years.
Table 5. Analysis of raw n-gram data
A (Original n-gram) | B (A without the tag) | C (Tag) | D (Lowercase B) | E (D without the DET) | F (Determiner) | G (uppercase present) | H (initial case) | I (apostrophe) | J (Frequency sum)
esempio       | esempio      | NOTAG | esempio      | esempio | NONE | L | L | N | 29,060,943
esempio_NOUN  | esempio      | NOUN  | esempio      | esempio | NONE | L | L | N | 29,059,470
Esempio       | Esempio      | NOTAG | esempio      | esempio | NONE | U | U | N | 509,596
Esempio_NOUN  | Esempio      | NOUN  | esempio      | esempio | NONE | U | U | N | 509,480
sull’esempio  | sull’esempio | NOTAG | sull’esempio | esempio | sull | L | L | Y | 218,974
esemPio_NOUN  | esemPio      | NOUN  | esempio      | esempio | NONE | U | L | N | 1,083
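A condensed sketch of this pre-processing step is given below; the column letters follow Table 5, but the helper itself is illustrative and not the project’s actual script:

```python
import re

def analyse_unigram(original, freq_sum):
    """Derive columns B-J of Table 5 from an original unigram (column A)."""
    if "_" in original:
        b, c = original.rsplit("_", 1)       # B: bare form, C: tag
    else:
        b, c = original, "NOTAG"
    d = b.lower()                            # D: lowercase form
    if "'" in d or "’" in d:                 # I: apostrophe present?
        f, e = re.split(r"['’]", d, maxsplit=1)  # F: determiner, E: remainder
        i = "Y"
    else:
        e, f, i = d, "NONE", "N"
    g = "U" if any(ch.isupper() for ch in b) else "L"  # G: any uppercase letter?
    h = "U" if b[:1].isupper() else "L"                # H: case of the first letter
    return dict(A=original, B=b, C=c, D=d, E=e, F=f, G=g, H=h, I=i, J=freq_sum)

analyse_unigram("esemPio_NOUN", 1_083)
# -> {'A': 'esemPio_NOUN', 'B': 'esemPio', 'C': 'NOUN', 'D': 'esempio',
#     'E': 'esempio', 'F': 'NONE', 'G': 'U', 'H': 'L', 'I': 'N', 'J': 1083}
```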
These data were subsequently analysed through a database system (PostgreSQL) and were used, among other things, to generate a list of unique types based on the word forms in Column E. These Italian and French unique types from unigrams will be introduced in Section 5.
As Table 6 shows, the Italian and French unigrams contain 3.4 and 6.4 million unique word forms, respectively, which means that, on average, roughly every third unigram in the source GN data contains a unique word form.
Table 6. Number of unique word forms in Italian and French unigrams in GN (2020)
                    | Italian            | French
Alphabetic unigrams | 11,077,935         | 20,172,028
Unique word forms   | 3,376,459 (30.48%) | 6,426,121 (31.86%)
By analysing the raw data in the way outlined in Section 4, the absolute and relative token frequencies for each word form can be calculated depending on what tag (if any) is assigned to them and whether they appear with initial lower- or upper-case letters. This information allows us to at least roughly determine how the tagging was performed by Google and to estimate the proportion of non-words and proper names, as such data are not relevant for further morphological analyses. Let us look at these two issues in more detail.
Although Named Entity Recognition techniques using neural networks are nowadays usually applied to identify proper names in texts, it does not seem appropriate to use them in this case. Firstly, we do not have access to texts but only to frequency lists, which are already tokenized in some way, and not all unigrams have corresponding higher order multigrams in the source data (indeed, the minimum threshold frequency of 40 tokens applies to bigram and trigram data, too). Secondly, the data cover a wide diachronic period of about 500 years and at the same time a wide thematic range, so it would be difficult to train a reliable NER model. For these reasons, a very simple but robust technique was used to estimate the proportion of proper nouns, based on an analysis of the frequencies of words with initial uppercase letters.
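Concretely, the measure is the share of a type’s total token frequency carried by its lowercase-initial unigram variants. A sketch, assuming records corresponding to columns E, H, and J of Table 5:

```python
from collections import defaultdict

def lowercase_ratio(rows):
    """rows: iterable of (word_type, initial_case, freq) records, where
    initial_case is 'L' or 'U' (column H of Table 5).
    Returns word_type -> share of tokens with an initial lowercase letter."""
    lower, total = defaultdict(int), defaultdict(int)
    for word_type, initial_case, freq in rows:
        total[word_type] += freq
        if initial_case == "L":
            lower[word_type] += freq
    return {t: lower[t] / total[t] for t in total}
```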
The distribution of Italian word forms according to their occurrence ratio with the first letter in lower case is reported in Table 7. The distribution is quite sharp: on the one hand, more than half of the word forms appear exclusively or almost exclusively with an initial lowercase letter (rounded to 1.00, their occurrence ratio with a first lowercase letter is thus equal to or higher than 0.995); on the other hand, more than a quarter of the word forms appear exclusively or almost exclusively with an initial uppercase letter (the same occurrence ratio, rounded to 0.00, is lower than 0.005). The remaining 21.66% of the sample is spread between these two extremes. The figures for occurrence ratios between 0.06 and 0.93 are so similar that they have been combined into three groups in the table, with the range between the minimum and maximum values indicated.
Table 7. Absolute and relative number of Italian word forms according to their occurrence ratio with the first lowercase letter
Occurrence ratio with first lowercase letter (rounded to whole percentages) | Number of word forms | Rate of word forms
0.00        | 868,251        | 25.71%
0.01        | 9,301          | 0.28%
0.02        | 6,203          | 0.18%
0.03        | 5,422          | 0.16%
0.04        | 4,926          | 0.15%
0.05        | 4,671          | 0.14%
0.06 - 0.43 | 3,859 - 4,495  | 0.11% - 0.13%
0.44 - 0.70 | 4,587 - 6,496  | 0.14% - 0.19%
0.71 - 0.93 | 6,766 - 16,988 | 0.20% - 0.50%
0.94        | 18,800         | 0.56%
0.95        | 21,275         | 0.63%
0.96        | 24,205         | 0.72%
0.97        | 28,170         | 0.83%
0.98        | 33,779         | 1.00%
0.99        | 39,241         | 1.16%
1.00        | 1,777,084      | 52.63%
Total       | 3,376,459      | 100%
Table 8 shows that the distribution of the French data has the same shape but differs at the extremes: in particular, the rate of forms that appear (almost) exclusively with an initial uppercase letter is noticeably higher (33.70% of types). This could be partly because the French dataset is almost twice as large as the Italian one: in corpora of this size, the number of proper nouns can be expected to grow faster as the corpus size increases. Another reason could be different typographical conventions in the original texts, but further qualitative research is needed to test this hypothesis.
Table 8. Absolute and relative number of French word forms according to the occurrence ratio with the first lowercase letter
Occurrence ratio with first lowercase letter (rounded to whole percentages) | Number of word forms | Rate of word forms
0.00        | 2,165,299      | 33.70%
0.01        | 18,771         | 0.29%
0.02 - 0.96 | 7,773 - 26,292 | 0.12% - 0.41%
0.97        | 29,100         | 0.45%
0.98        | 33,470         | 0.52%
0.99        | 41,988         | 0.65%
1.00        | 3,137,215      | 48.82%
Total       | 6,426,121      |
To sum up, proper nouns are likely to represent roughly 26% and 34% of the Italian and French word forms, respectively. If proper nouns are not relevant to the research, the data size can be effectively reduced by removing word forms that appear (almost) exclusively in the GNs with an initial uppercase letter, i.e., whose occurrence ratio with a first lowercase letter is lower than 0.005 (0.5%). Although it is also possible to remove word forms with a slightly higher lowercase occurrence ratio, this does not noticeably reduce the sample further and risks unintentionally filtering out common nouns.
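A sketch of this reduction, assuming the lowercase occurrence ratios have already been computed as above (names are illustrative):

```python
def drop_probable_proper_nouns(lc_ratio, threshold=0.005):
    """Keep only word forms whose occurrence ratio with an initial
    lowercase letter reaches the 0.5% threshold (cf. Tables 7-8)."""
    return {form for form, ratio in lc_ratio.items() if ratio >= threshold}
```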
As far as POS tagging is concerned, it seems that a large part of the GB corpus was not tagged at all. Not only do more than 44% of n-grams have no tag (cf. Table 4), but the analysis of unigrams shows that no word form has an occurrence ratio greater than 0.5 with any of the tags. In order to estimate the proportion of potential non-words among unigrams, it is essential to take a closer look at forms with the “X” tag and at forms without any tag.
The X tag does not seem to be relevant for identifying non-words within the Italian dataset since 99.4% of word forms never appear with this tag. In the French dataset, however, only 83.68% of word forms never have the X tag, and the number of word forms having a proportion of occurrences with an X tag greater than 0.01 is not negligible (903,788). A qualitative look at the data reveals that a reasonable threshold for identifying non-words might be the proportion of occurrences with an X tag greater than 0.1 (10%): in this way, a total of 649,244 word forms, which are overwhelmingly foreign words, can be filtered out from the French data.
The distribution of Italian and French word forms according to their occurrence ratio without any tag is very uneven, as Table 9 shows. As indicated in the first row, almost all forms in the GNs appear without a tag in at least 50% of their occurrences (i.e., extremely few forms have an occurrence ratio with “no tag” lower than 0.5), which again indicates that about half of the data were not tagged. Most forms have a tag-free occurrence ratio between 0.5 and 0.6 – this applies to 85.65% of the Italian word forms and to 79.62% of the French ones. The second sizeable group consists of forms with a tag-free occurrence ratio equal to 1.00 (the value is not rounded) in the last row. A qualitative look at the data confirms that these, in particular, are apt candidates for filtering out as non-words.
Table 9. Absolute and relative number of Italian and French word forms according to the occurrence ratio with “no tag”
Occurrence ratio with no tag (not rounded) | Italian: word forms | Rate   | French: word forms | Rate
< 0.5           | 21        | 0.00%  | 264       | 0.00%
= 0.5           | 660,026   | 19.55% | 631,851   | 9.83%
> 0.5 and < 0.6 | 2,231,689 | 66.10% | 4,484,970 | 69.79%
> 0.6 and < 1   | 141,835   | 4.20%  | 403,511   | 6.28%
= 1             | 342,888   | 10.16% | 905,525   | 14.09%
Total           | 3,376,459 |        | 6,426,121 |
(Combined rate of the rows “= 0.5” and “> 0.5 and < 0.6”: 85.65% for Italian, 79.62% for French.)
An interesting question is how the word forms with a tag-free occurrence ratio equal to 1.00 are distributed diachronically. We might expect them to have a higher proportion in older data, presuming that newer data could have been tagged in a more exhaustive way. However, this is not the case. In the Italian data, the relative type frequencies of these potential non-words by decades, calculated as the number of potential non-words (types) divided by the total number of types in the respective decade from the 1500s on, vary from 1.7% (1510-1519) to 9% (1850-1859). Although the maximum values are concentrated specifically in the 19th century, values over 8% concern also very recent data (decades 1990-1999 and 2000-2009). The French data offer a similar picture with relative type frequencies of potential non-words spanning from 1% to 13% with most peaks concentrated in the 19th century, and oscillations within the range 9-11.4% from the 1960s on.
Thus, the data analysis shows that an effective filtering of non-words can be achieved within unigrams by removing word forms that satisfy at least one of the following properties: a) their occurrence ratio with a first lowercase letter is lower than 0.005; b) their occurrence ratio without tags is equal to 1.00; c) in the French dataset, their occurrence ratio with the X tag is greater than 0.1. The simultaneous application of these filters to the databases of word forms yields 2,185,812 Italian and 2,951,444 French unique word forms. Compared to the data in Table 6, this means that 35.2% of the Italian and 54% of the French unique forms are likely to represent non-words or proper nouns.
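A compact restatement of the three filters, assuming the per-type ratio dictionaries (lowercase-initial, tag-free, and X-tag occurrence ratios) have been computed as sketched above:

```python
def is_probable_non_word(form, lc_ratio, notag_ratio, x_ratio, french=False):
    """Filters a)-c): initial-lowercase ratio below 0.005 (a), tag-free
    occurrence ratio equal to 1.00 (b) and, for French only, X-tag
    occurrence ratio above 0.1 (c)."""
    return (lc_ratio.get(form, 0.0) < 0.005
            or notag_ratio.get(form, 0.0) == 1.0
            or (french and x_ratio.get(form, 0.0) > 0.1))
```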
- 12 One possible way is to link GN data with derivational networks, such as Démonette (Hathout & Namer (...)
If we now change our perspective and ask what kind of items we want to look for in the GN data, it inevitably turns out that the identification of unique word forms, and of potential non-words among them, as introduced in Section 5, is only the first step. On the one hand, word forms from unigrams need to be further analysed to identify morphologically complex words and to determine their internal structure. This procedure is better done separately for each word-formation pattern and, although it can be partially automated, it still requires a lot of manual cleaning, as Radimský & Štichauer (2021) show. We therefore leave this issue for further research.12 On the other hand, the data obtained from unigrams are not sufficient, either because many morphologically complex words are spelled differently, with hyphenated or separate components, or because the immediate context of the unigram needs to be known for disambiguation. We will briefly focus on these two problems here in order to propose an appropriate processing of the bigram and trigram data.
- 13 However, exceptions confirm the rule: for example, Italian Verb-Noun compounds, such as asciugacape (...)
- 14 Leaving thus aside more complex phrasal lexemes (such as Fr. hors-la-loiN, ‘outlaw’ or avion à réac (...)
As far as the spelling of complex words in Romance is concerned, a hyphen may appear in some derived words (as in French ex-président, ‘ex-president’), but components separated by a hyphen or by a space are mainly found in diverse types of Romance compounds.13 If we focus, following Fradin (2009), only on true compounds made up of just two words,14 we immediately notice some spelling instability. Still today, some compounds are frequent in both spellings (see the French Noun-Noun examples in (5-6)), and for many of them the spelling has only stabilized over time, so the data should allow both spelling variants to be examined separately while also offering an aggregate, spelling-independent view.
(5a) mot-clé – ‘keyword’
(5b) mot clé – ‘keyword’
(6a) code-barre(s) – ‘bar code’
(6b) code barre(s) – ‘bar code’
- 15 Original bigrams and trigrams contain an excessive amount of undesired data that had to be weeded o (...)
However, this requirement is not consistent with the way the original data are organized. Compounds made up of two words written separately (5b, 6b) are located in the original dataset of bigrams while, as revealed by the analysis of one of the most frequent Italian hyphenated compounds, decreto-legge (‘decree-law’), hyphenated compounds made up of two words (5a, 6a) are located in the original dataset of trigrams, where the middle n-gram corresponds either to a hyphen (‘-’) or to a sequence of two hyphens (‘--’). A unified treatment thus requires processing the GN data from bigrams and trigrams into one common dataset containing a list of unique word-form couples, in which, in addition to the total frequency, the frequency corresponding to the hyphenated spelling is indicated separately. In this way, we prepared the Italian and French datasets labelled “BI”.15
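The merging step can be sketched as follows; the record layout is illustrative, and the actual BI datasets may store additional fields:

```python
from collections import defaultdict

def build_bi(bigrams, trigrams):
    """bigrams:  iterable of ((w1, w2), freq) for spellings like 'mot clé';
    trigrams: iterable of ((w1, mid, w2), freq), where hyphenated compounds
              such as 'decreto-legge' have '-' or '--' as the middle n-gram.
    Returns (w1, w2) -> {'total': ..., 'hyphenated': ...}."""
    bi = defaultdict(lambda: {"total": 0, "hyphenated": 0})
    for (w1, w2), freq in bigrams:
        bi[(w1, w2)]["total"] += freq
    for (w1, mid, w2), freq in trigrams:
        if mid in ("-", "--"):                 # keep only hyphen trigrams
            bi[(w1, w2)]["total"] += freq
            bi[(w1, w2)]["hyphenated"] += freq
    return dict(bi)
```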
As for the disambiguation issue, in some cases the immediate context of the unigram needs to be known. In Romance languages, this is mainly a matter of distinguishing nouns from verbs or adjectives, which helps in disambiguating, for instance, zero-derived nouns (It. noleggiare.V ‘rent’ > noleggio.N ‘rental’) from verbs (It. noleggio.V.1.sg.ind.pres., ‘I rent’), or nouns from adjectives with a homonymous suffix (It. pulsante.A, ‘pulsating, beating’ vs. pulsante.N, ‘button’). For this reason, we created for each language a specific dataset of word forms labelled “UNIDET”, based on the original bigrams, in which the resulting word form (originally in the second position of the source bigram) was preceded by an unambiguous determiner (originally in the first position). Since neither verbs nor derived adjectives can appear in the position immediately following the determiner, this dataset should preferentially include complex nouns.
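The UNIDET extraction can be sketched along the same lines; the determiner list below is a small illustrative subset, not the list actually used in the project:

```python
from collections import defaultdict

# Unambiguous Italian determiners that cannot precede a finite verb
# (illustrative subset only).
DETERMINERS = {"il", "lo", "la", "i", "gli", "le", "un", "uno", "una"}

def build_unidet(bigrams):
    """Sum the frequencies of word forms immediately preceded by an
    unambiguous determiner in the source bigrams."""
    unidet = defaultdict(int)
    for (first, second), freq in bigrams:
        if first.lower() in DETERMINERS:
            unidet[second] += freq
    return dict(unidet)
```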
The overall structure of these new datasets (with labels corresponding to the Italian data) is outlined in Table 10.
Table 10. The structure of new n-gram datasets
- 16 In the cases when Italian sequences PREP/DET+N, such as dell’esempio, have been treated as unigrams (...)
New datasets | Unique types                    | Diachronic data                         | Source of data in original GNs
UNI          | it2020_uni, it2020_uni_filtered | it2020_uni_dia, it2020_uni_dia10        | 1-grams
UNIDET       | it2020_unidet                   | it2020_unidet_dia, it2020_unidet_dia_10 | 2-grams (1-grams16)
BI           | it2020_bi                       | it2020_bi_dia, it2020_bi_dia10          | 2-grams (filtered), 3-grams (with the hyphen in the middle position)
The three datasets thus contain word forms (uni and uni_filtered), word forms that were preceded by a determiner (unidet), and combinations of two word forms that were separated by a space or a hyphen in the original data (bi), respectively. Each dataset of unique types contains information about the total token frequency; the bi dataset also gives the frequency of the hyphenated form, and the uni dataset provides specific frequencies related to the use of uppercase letters and the tags assigned to the original unigrams, as discussed in Section 5.
The crucial point, however, is that for each dataset of unique types, two corresponding diachronic datasets were extracted using the same filtering procedures as those applied to the dataset of unique types. These diachronic datasets indicate the frequencies of each unique type (i.e., of each word form in UNI, each word form preceded by a determiner in UNIDET, and each couple of word forms in BI) by years and by decades, respectively. The datasets are being released progressively in the public repository https://osf.io/46qcd/ (Radimský 2021) and will be gradually supplemented with new data based on further morphological analyses.
The analysis of tens of millions of n-grams showed that the 2020 version of the Italian and French GNs contains 3.4 and 6.4 million unique (alphabetic) word forms, respectively, a significant part of which are likely to be non-words or proper nouns. Nevertheless, this still represents a large amount of data with diachronic frequency information (expressed as the publication year of the source books) over a period of about 500 years, which may be of primary interest for research in Romance word-formation.
The source GN data from unigrams, bigrams, and trigrams have been preprocessed and made accessible to facilitate further use in research on lexical morphology, whether it concerns complex words written as a single word form or complex words with hyphenated or separate components. Moreover, the preprocessing makes it easy to disambiguate the diachronic frequencies of complex nouns from those of other parts of speech. The datasets, released progressively in a public repository (Radimský 2021), contain both lists of unique types and the respective pairs of diachronic datasets with frequencies by years and by decades. They will be gradually supplemented with new data based on further filtering using morphological analyses. Further analyses could also make it possible to use these data to enrich existing resources of derivational networks, such as Démonette (Hathout & Namer 2014), by adding both new word forms and diachronic frequency information.
However, it should be borne in mind that such a large amount of data will necessarily contain a certain percentage of errors, which will only become apparent in further analyses. It can be assumed that the older the data are, the higher the percentage of errors will be.