
Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties

Alessio Miaschi, Gabriele Sarti, Dominique Brunato, Felice Dell’Orletta and Giulia Venturi

Abstract

In this paper, we present an in-depth investigation of the linguistic knowledge encoded by the transformer models currently available for the Italian language. In particular, we investigate how the complexity of two different probing model architectures affects the performance of the Transformers in encoding a wide spectrum of linguistic features. Moreover, we explore how this implicit knowledge varies across textual genres and language varieties.


1. Introduction and Motivation

In the last few years, the study of Neural Language Models (NLMs) and their representations has become a key research area in the Natural Language Processing (NLP) community. Several methods have been devised to obtain meaningful explanations of how these models capture syntax- and semantic-sensitive phenomena (Belinkov and Glass 2019). Among them, the probing task approach has emerged as the most commonly adopted diagnostic strategy to estimate the mutual information shared by a neural network’s representations and some latent property that the model could have learned to encode during training. In probing experiments, a supervised model (probe) is trained to predict the latent information from the network’s learned representations. If the probe does well, we may conclude that the network effectively encodes some knowledge related to the selected property. Formally speaking, let \(f: x_i \rightarrow y_i\) be a neural network model mapping a corpus of input sentences \(X = (x_1, \dots, x_n)\) to a set of target labels \(Y = (y_1, \dots, y_n)\) for a learned downstream task. Assume that each sentence \(x_i\) is also labeled with some linguistic annotation \(z_i\), reflecting the underlying property we aim to detect. Let also \(h_l(x_i)\) be the network’s output at the \(l\)-th layer given the sentence \(x_i\) as input. To estimate the quality of representations \(h_l\) with respect to property \(z\), a supervised model \(g: h_l(x_i) \rightarrow z_i\) mapping representations to property values is trained. We take this model’s performance as a proxy for the mutual information \(I(h_l(x); z)\). In information-theoretic terms, the probe is trained to minimize the conditional entropy \(H(z \mid h_l(x))\), and in doing so it maximizes the mutual information between the two quantities.
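As a minimal illustration of this recipe, the sketch below trains a probe \(g\) on frozen layer representations and uses its held-out \(R^2\) as the performance proxy; the choice of regressor, the data split and all names are illustrative assumptions, not the exact setup used in this paper.

```python
# Minimal probing sketch: a supervised probe g is trained to predict a
# linguistic property z from frozen layer-l representations h_l(x).
# Regressor choice and split are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def probe_score(h_l: np.ndarray, z: np.ndarray) -> float:
    """h_l: (n_sentences, hidden_size) representations; z: (n_sentences,)
    gold values of one linguistic feature. Returns held-out R^2, used as
    a proxy for the mutual information I(h_l(x); z)."""
    h_tr, h_te, z_tr, z_te = train_test_split(h_l, z, test_size=0.2,
                                              random_state=0)
    g = LinearRegression().fit(h_tr, z_tr)   # probe g: h_l(x_i) -> z_i
    return r2_score(z_te, g.predict(h_te))
```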

Alain and Bengio (2017) were among the first to use linear probing classifiers as tools to evaluate the presence of task-specific information inside neural networks’ layers. The approach was later extended to NLP by Conneau et al. (2018) and Zhang and Bowman (2018), inter alia, who used probing task suites to evaluate the presence of semantic and syntactic information inside sentence embeddings generated by LSTM encoders (Hochreiter and Schmidhuber 1997) pretrained on different objectives.

Nowadays, several studies adopt the probing task approach to investigate the inner workings of state-of-the-art Neural Language Models (NLMs). This line of work has demonstrated that NLM representations encode linguistic knowledge in a hierarchical manner (Belinkov et al. 2017; Blevins, Levy, and Zettlemoyer 2018; Tenney et al. 2019), and can even support the extraction of dependency parse trees (Hewitt and Manning 2019). Jawahar, Sagot, and Seddah (2019) investigated the representations learned across the layers of BERT (Devlin et al. 2019), one of the most prominent NLMs, showing that lower layers are usually better at capturing surface features, while embeddings from higher layers are better for syntactic and semantic properties. Using a suite of probing tasks, Tenney, Das, and Pavlick (2019) explored this behavior in depth, showing that the linguistic knowledge encoded by BERT through its 12/24 layers follows the traditional NLP pipeline.

While the vast majority of this research has focused on English contextual representations, relatively little work has been done to understand the inner workings of non-English models. The study by Vries, Cranenburgh, and Nissim (2020) represents an exception in this context: the authors applied the probing task approach to compare the linguistic competence encoded by a Dutch BERT-based model and multilingual BERT (mBERT), showing that earlier layers of mBERT are consistently more informative than those of the monolingual model. Guarasci et al. (2021) instead applied the structural probe originally defined by Hewitt and Manning (2019) to the representations of a pre-trained Italian BERT. Testing their approach on different subsets of the Italian Universal Dependency Treebank (IUDT), they showed, on the one hand, that the model is able to encode syntactic properties especially in its central-upper layers and, on the other hand, that such embedded syntactic information can be used to successfully perform two specific syntactic tasks, i.e. the prediction of subject-verb agreement and the parsing of null-subject sentences. In Guarasci et al. (2022), the authors exploited the same methodology to investigate the ability of multilingual BERT to transfer syntactic knowledge across English, French and Italian.

Another less investigated issue in previous studies concerns the design of the probing models themselves. Although many studies have probed the inner linguistic competence of multiple transformer models across diagnostic tasks, few works have tested different probing architectures and investigated their actual effectiveness in depth. Among these few works, Hewitt and Liang (2019) were the first to observe that probing tasks might conceal the information about the NLM representation behind the ability of the probe to learn surface patterns in the data. To test this idea, they introduced control tasks, a set of tasks that associate word types with random outputs and can be solved simply by learning regularities. In addition, Pimentel et al. (2020) showed that more complex probes, in contrast with simple linear models, can produce tighter estimates of the actual underlying information.

Starting from these premises, this paper introduces an approach to NLM interpretation aimed at carrying out an in-depth investigation of the linguistic knowledge implicitly encoded by 6 Italian monolingual models and multilingual BERT. Besides the focus on Italian, a language scarcely considered in NLM interpretation studies, a further novelty of our approach concerns the broad set of probing tasks we took into account, each corresponding to a specific property of sentence structure. In addition, the present study addresses a still rather under-investigated research issue, i.e. the comparative analysis of how, and to what extent, the architecture of the probing model influences probing accuracy. To address this point, for each Transformer we performed the same suite of probing tasks using both a LinearSVR and a multi-layer perceptron (MLP), and compared how the resolution of each probing task is affected by the two architectures. Since all experiments were carried out on different sections of the Italian Universal Dependency Treebank (Zeman et al. 2019), considered representative of different textual genres and language varieties, we are also able to investigate how the linguistic knowledge of NLMs varies between standard and less standard or non-standard varieties of Italian.

The present article is based on, and extends, the work reported in Miaschi, Sarti, et al. (2020).

1.1 Contributions

To the best of our knowledge, this is the first study aimed at comparing the linguistic knowledge encoded in the representations of multiple non-English pre-trained transformer models. In particular:

  • we compare the probing performance of 7 pre-trained Italian NLMs spanning three model architectures over multiple linguistic features;

  • we investigate how the complexity of the probing classifier impacts its ability to capture the information encoded in learned representations;

  • we highlight how the implicit knowledge encoded by NLMs during the training process differs across textual genres and language varieties.

2. Approach

To inspect the inner knowledge of language encoded by the Italian Transformers, we relied on a suite of 82 probing tasks, each of which consists of predicting the value of a given feature modeling a specific linguistic property of the sentence. We tested two different probing architectures: a LinearSVR and a three-layer feedforward network with ReLU activations (multi-layer perceptron, MLP). While the linear architecture is the most commonly used approach to inferring the information encoded inside NLMs, the MLP was selected to investigate the presence of nonlinear relations in the representations, which could hamper the performance of the LinearSVR probe. Regardless of the architecture, the two probing models take as input layer-wise sentence-level representations extracted from the Italian models. These representations are produced for each sentence of different sections of the Italian Universal Dependency Treebank (IUDT), version 2.5 (Zeman et al. 2019), and used to predict the actual value of each probing feature. Starting from the results obtained, we performed three complementary investigations. In the first one, we compared the results obtained by the two probing architectures across different groups of probing tasks (Section 3.1). We then compared the linguistic competence of the 7 Italian Transformers (Section 3.2). Finally, the impact of the considered linguistic varieties on the linguistic generalization abilities of the NLMs is discussed in Section 3.3.
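As a concrete reference, the two probing architectures could be instantiated with scikit-learn as sketched below; the hyperparameters (hidden layer sizes, iteration caps) are our assumptions, as they are not specified here.

```python
# Sketch of the two probing architectures (hyperparameters assumed).
from sklearn.svm import LinearSVR
from sklearn.neural_network import MLPRegressor

# Linear probe: support vector regression with a linear kernel.
linear_probe = LinearSVR(max_iter=10000)

# Nonlinear probe: feedforward network with ReLU activations,
# interpreting "three-layer" as two hidden layers plus the output
# layer; the hidden sizes are illustrative.
mlp_probe = MLPRegressor(hidden_layer_sizes=(512, 256),
                         activation="relu", max_iter=500)
```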

2.1 Models and Data

We relied on 7 pre-trained Italian models based on three different Transformer architectures: BERT (Devlin et al. 2019), RoBERTa (Y. Liu et al. 2019) and GPT-2 (Radford et al. 2019). In particular, we investigated the linguistic competence of: three BERT-based models, i.e. Multilingual-BERT, BERT-base-italian [1] and AlBERTo (Polignano et al. 2019), trained respectively on Wikipedia (102 languages), Italian Wikipedia + texts from the OPUS corpus (Tiedemann and Nygaard 2004), and TWITA (Basile, Lai, and Sanguinetti 2018); three RoBERTa-based models, i.e. GilBERTo [2] and two versions of UmBERTo [3], trained respectively on OSCAR (Ortiz Suárez, Sagot, and Romary 2019) (GilBERTo and UmBERTo-Commoncrawl) and Italian Wikipedia; and a GPT-2-based model, GePpeTto (De Mattei et al. 2020), trained on Italian Wikipedia + ItWAC (Baroni et al. 2009). Model statistics are reported in Table 1. Sentence-level representations were computed by mean-pooling the word embeddings produced by each model at each of its layers.

Table 1: NLMs used in the experiments.

Name                  Training data
BERT architecture
Multilingual-BERT     Wikipedia (102 languages)
BERT-base-italian     Wikipedia + OPUS (13GB)
AlBERTo               TWITA (191GB)
RoBERTa architecture
GilBERTo              OSCAR (71GB)
UmBERTo-Commoncrawl   OSCAR (69GB)
UmBERTo-Wikipedia     Wikipedia (7GB)
GPT-2 architecture
GePpeTto              Wikipedia + ItWAC (14GB)
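A possible implementation of the representation-extraction step is sketched below with the Hugging Face transformers library, using the public BERT-base-italian checkpoint (see note 1); pooling over all tokens, special tokens included, is a simplifying assumption.

```python
# Sketch: layer-wise mean-pooled sentence representations.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "dbmdz/bert-base-italian-cased"  # public checkpoint, see note 1
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def sentence_representations(sentence: str) -> torch.Tensor:
    """Returns a (n_layers + 1, hidden_size) tensor holding one
    mean-pooled sentence vector per layer (index 0 = embedding layer)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: tuple of (1, seq_len, hidden_size) tensors;
    # averaging over the sequence dimension includes special tokens.
    return torch.stack([h.mean(dim=1).squeeze(0)
                        for h in out.hidden_states])
```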

The NLMs’ linguistic competence is probed against 5 sections of the Italian Universal Dependency Treebank (IUDT) representative of different language varieties and textual genres, as shown in Table 2. The considered sections can be categorised into two main groups: a first one that includes sentences drawn from documents of diverse nature, ranging from Wikipedia pages to newspaper articles, novels, speech transcriptions, etc., and a second one collecting examples of social media language, in particular from Twitter. In the first group we included the Italian version of the multilingual Turin University Parallel Treebank (ParTUT) (Sanguinetti and Bosco 2015), the Venice Italian Treebank (VIT) (Delmonte, Bristot, and Tonelli 2007) and the Italian Stanford Dependency Treebank (ISDT) (Bosco, Montemagni, and Simi 2013), which we consider representative of standard Italian. The second group, composed of PoSTWITA (Sanguinetti et al. 2018) and TWITTIRÒ (Cignarella, Bosco, and Rosso 2019), was originally built to enhance the performance of systems processing social media texts, in particular for irony detection purposes. Being representative of a non-standard variety of Italian, for our purposes these treebanks constitute a quite challenging testbed for probing the linguistic knowledge of NLMs, especially when trained on standard language varieties.

Note that the linguistic abilities of the 7 NLMs were also tested against a number of sub-portions of the largest Italian UD treebank, i.e. ISDT. These were chosen because they are representative of language sub-varieties possibly infrequently seen during the NLMs’ training phase. Accordingly, they can be conceived as a privileged vantage point for investigating to what extent general-purpose NLMs are robust to less standard texts. For this purpose, in addition to sub-sections including newspapers (ISDT_tanl) and miscellaneous documents (ISDT_tut), we considered sub-portions including sentences in the interrogative form (ISDT_quest), newspaper articles specifically written to be linguistically simple (ISDT_2parole) and transcriptions of European Parliament oral debates (ISDT_europarl).

Table 2: Sections of the Italian Universal Dependency Treebank (IUDT).

Short Name      Types of texts            # sent
ParTUT          Multi-genre                2,090
VIT             Multi-genre               10,087
ISDT            Multi-genre               14,167
ISDT_tanl       Newswire                   4,043
ISDT_tut        Legal/Newswire/Wiki        3,802
ISDT_quest      Interrogative sentences    2,162
ISDT_2parole    Simplified Italian news    1,421
ISDT_europarl   EU Parliament debates        497
PoSTWITA        Tweets                     6,713
TWITTIRÒ        Ironic Tweets              1,424
Total                                     35,481

2.2 Probing features

Each probing task consists of predicting the value of a specific linguistic feature automatically extracted from the manually revised annotation of each sentence of the IUDT datasets.

We relied on the set described in Brunato et al. (2020), which includes about 130 features representative of the linguistic structure underlying a sentence, derived from the raw, morpho-syntactic and syntactic levels of annotation. For the specific purpose of this study, we selected the 82 most frequent features in order to prevent data sparsity issues and make our results more reliable.


As shown in Table 3, the considered tasks are intended to probe whether the NLMs encode in their representations 9 main aspects of sentence structure. They range from quite simple aspects related to raw text properties (i.e. sentence and word length), to vocabulary richness (in terms of type/token ratio), to the distribution of UD and language-specific Parts-Of-Speech [4] and of inflectional properties, in particular those of verbal predicates (i.e. mood, tense, person). More challenging probing tasks concern the ability to encode complex aspects of sentence structure, including both global structure, such as the depth of the whole syntactic tree, and local features. We paid specific attention to testing the models’ knowledge of the sub-trees of the nuclear elements of a sentence. In this respect, we included a group of features modelling verbal predicate structure, e.g. in terms of the number of dependents of verbal heads, and a group referring to the order of subjects and objects with respect to their verbal head. In line with the focus on specific sub-trees, we also considered a group of features capturing the use of subordination, in terms of the distribution of subordinate clauses, their internal structure and their relative order with respect to the main clause.

Table 3: Probing features used in the experiments.

Linguistic Feature                                            Label

Raw Text Properties (RawText)
  Sentence Length                                             sent_length
  Word Length                                                 char_per_tok

Vocabulary Richness (Vocabulary)
  Type/Token Ratio for words and lemmas                       ttr_form, ttr_lemma

Morphosyntactic information (POS)
  Distribution of UD and language-specific POS                upos_dist_*, xpos_dist_*
  Lexical density                                             lexical_density

Inflectional morphology (VerbInflection)
  Inflectional morphology of lexical verbs and auxiliaries    verbs_*, aux_*

Verbal Predicate Structure (VerbPredicate)
  Distribution of verbal heads and verbal roots               verbal_head_dist, verbal_root_perc
  Verb arity and distribution of verbs by arity               avg_verb_edges, verbal_arity_*

Global and Local Parsed Tree Structures (TreeStructure)
  Depth of the whole syntactic tree                           parse_depth
  Average length of dependency links and of the longest link  avg_links_len, max_links_len
  Average length of prepositional chains and
  distribution by depth                                       avg_prep_chain_len, prep_dist_1
  Clause length                                               avg_token_per_clause

Order of elements (Order)
  Relative order of subject and object                        subj_pre, subj_post, obj_post

Syntactic Relations (SyntacticDep)
  Distribution of dependency relations                        dep_dist_*

Use of Subordination (Subord)
  Distribution of subordinate clauses                         subordinate_prop_dist
  Average length of subordination chains and
  distribution by depth                                       avg_subord_chain_len, subordinate_dist_1
  Relative order of subordinate clauses                       subordinate_post

We chose to rely on these linguistic characteristics for two main reasons. Firstly, they have been shown to be highly predictive when leveraged by traditional learning models on a variety of classification problems where linguistic information plays a fundamental role. In addition, they are multilingual, as they are based on the Universal Dependencies formalism for sentence representation, which guarantees the comparable encoding of language phenomena across different languages (Nivre 2015). In fact, they have also been used to profile the knowledge encoded in the language representations of a pretrained NLM, specifically the English BERT, and how it changes across layers (Miaschi, Brunato, et al. 2020).
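To make the feature-extraction step concrete, the following sketch computes a handful of the features in Table 3 from a UD-annotated sentence using the conllu Python package; the normalization choices (e.g. lowercasing for the type/token ratio) are our assumptions, and the full 130-feature set is produced by the Profiling-UD tool described in Brunato et al. (2020).

```python
# Illustrative extraction of a few probing features from UD annotation,
# using the conllu package (field names follow conllu >= 2.0).
from conllu import parse

def extract_features(sent) -> dict:
    # Keep only regular word tokens (skip multi-word token ranges).
    words = [t for t in sent if isinstance(t["id"], int)]
    heads = {t["id"]: t["head"] for t in words}

    def depth(tok_id: int) -> int:
        # Distance from a token to the root of the dependency tree.
        return 0 if tok_id == 0 else 1 + depth(heads[tok_id])

    n = len(words)
    return {
        "sent_length": n,
        "char_per_tok": sum(len(t["form"]) for t in words) / n,
        "ttr_form": len({t["form"].lower() for t in words}) / n,
        "upos_dist_VERB": sum(t["upos"] == "VERB" for t in words) / n,
        "parse_depth": max(depth(t["id"]) for t in words),
    }

# Usage (file name assumed): first sentence of the ISDT training split.
sentences = parse(open("it_isdt-ud-train.conllu", encoding="utf-8").read())
print(extract_features(sentences[0]))
```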

3. Experiments and Results

In this section we report the results of the three complementary investigations we carried out using the probing strategies devised above.

3.1 Comparison of Probing Model Architectures

Table 4: Average \(R^2\) scores for all the NLMs obtained with the LinearSVR and the MLP probing models. Baseline scores for a LinearSVR trained only on sentence length are also reported.

Groups          LinearSVR   MLP    Baseline
RawText         0.84        0.80   0.50
Vocabulary      0.70        0.34   0.19
POS             0.69        0.68   0.03
VerbInflection  0.50        0.61   0.03
VerbPredicate   0.32        0.43   0.08
TreeStructure   0.61        0.64   0.40
Order           0.46        0.55   0.06
SyntacticDep    0.65        0.74   0.04
Subord          0.49        0.60   0.16
AllFeatures     0.60        0.64   0.10


Our first analysis concerns the comparison of the two architectures considered for probing the linguistic knowledge encoded by the Italian Transformers. Since many of our probing features are strongly related to sentence length, we compared these results with those obtained by a baseline corresponding to a LinearSVR model trained using only sentence length as input feature. Table 4 reports the average \(R^2\) results [5] across all the layers of all 7 NLMs obtained with the LinearSVR and the MLP probing architectures, along with baseline scores.
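A sketch of such a baseline follows, under the assumption that it is trained and evaluated exactly like the probes; the split and iteration cap are illustrative.

```python
# Sketch of the sentence-length baseline: a LinearSVR trained with
# sentence length as its only input feature, scored with R^2.
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def length_baseline_r2(sent_lengths: np.ndarray, z: np.ndarray) -> float:
    X = sent_lengths.reshape(-1, 1)          # single-feature input
    X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.2,
                                              random_state=0)
    baseline = LinearSVR(max_iter=10000).fit(X_tr, z_tr)
    return r2_score(z_te, baseline.predict(X_te))
```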

As a first remark, we notice that both probing architectures outperform the sentence-length baseline. This suggests that all NLMs encode a spectrum of phenomena that, although related to syntagmatic complexity, require more sophisticated linguistic knowledge to be accurately predicted. However, if we compare the results achieved by the two architectures on all groups of linguistic phenomena (AllFeatures), we can see that the MLP architecture achieves higher \(R^2\) scores. This is specifically the case for the groups of features referring to verb inflectional morphology (VerbInflection), verbal predicate structure (VerbPredicate) and the use of subordination (Subord), for which the difference between the two architectures is larger. Conversely, the LinearSVR proved more accurate in probing the NLMs’ knowledge of raw text properties, vocabulary richness and the distribution of Parts-Of-Speech. Interestingly, the SVR architecture outperforms the MLP by more than .30 \(R^2\) points when predicting features related to vocabulary richness (Vocabulary). The improvement observed for the MLP on syntactic features can be attributed to the nonlinearities in the probing model, which allow it to capture non-linear relations between learned features. On the other hand, this increase in model capacity seems to hinder the performance of the probe on low-level features (RawText, Vocabulary, POS), for which a simple linear combination can be sufficient. Despite this difference, a comparison of the rankings of linguistic phenomena ordered by decreasing scores for the two probing models shows that in both cases raw text properties and the distribution of morpho-syntactic categories (POS) appear in the first positions, while the order of subject and object (Order) and the structure of verbal predicates (VerbPredicate) are found in the lower part of the ranking. This observation suggests that the hierarchy of linguistic information captured by probing models is preserved regardless of the architectural complexity of the probe. As a matter of fact, computing the Spearman correlation between the average scores obtained for the 82 linguistic features with the LinearSVR and the MLP yields a \(\rho\) of 0.71, indicating a strong correlation between the scores obtained with the two probing models.
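The ranking comparison at the end of the paragraph can be reproduced as sketched below, assuming two dictionaries mapping the 82 feature labels to the average \(R^2\) scores of each probe (names are illustrative).

```python
# Spearman correlation between the per-feature scores of the two probes.
from scipy.stats import spearmanr

def ranking_correlation(svr_scores: dict, mlp_scores: dict) -> float:
    feats = sorted(svr_scores)               # shared feature labels
    rho, _pvalue = spearmanr([svr_scores[f] for f in feats],
                             [mlp_scores[f] for f in feats])
    return rho
```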

Figure 1: Layer-wise average \(R^2\) scores obtained by each NLM with the two probing models.

Figure 2: Average layer-wise \(R^2\) scores obtained with the LinearSVR (top) and the MLP (bottom) using the internal representations of the 7 NLMs.

In order to ensure that our probes are actually revealing the linguistic generalization abilities of the NLMs rather than learning the linguistic tasks themselves, we also tested the probing models using the control task approach devised by Hewitt and Liang (2019). We produced a control version of the IUDT corpus by randomly shuffling the linguistic feature values assigned to each sentence and performed the same probing tasks with the two probing classifiers on all NLM representations. The correlation and \(R^2\) scores between the regressors’ predictions and the shuffled values were low (< 0.05) and comparable for both the SVR and the MLP. These results support the claim that NLM representations encode information closely related to linguistic competence and that our probing models are not relying on spurious signals unrelated to our linguistic properties to solve the regression task.
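The control check can be sketched as follows: target values are shuffled across sentences, so any residual probe performance reflects the probe’s own capacity to memorize rather than information in the representations (the probe object and split are assumptions).

```python
# Control-task sketch (after Hewitt and Liang 2019, adapted to
# regression): retrain the probe on randomly shuffled targets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def control_r2(h_l: np.ndarray, z: np.ndarray, probe) -> float:
    """A selective probe should score near zero here."""
    z_ctrl = np.random.default_rng(0).permutation(z)  # break the link
    h_tr, h_te, z_tr, z_te = train_test_split(h_l, z_ctrl, test_size=0.2,
                                              random_state=0)
    probe.fit(h_tr, z_tr)
    return r2_score(z_te, probe.predict(h_te))
```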

3.2 Comparison of Italian Transformers

To investigate to what extent each transformer encodes the considered set of linguistic phenomena, we compared the performance achieved by the 7 NLMs using the two probing architectures. Results are reported in Figure 1, where we can notice that the 7 Transformers achieve quite similar results when considering all features as a whole (all). Nevertheless, a more in-depth analysis highlights a number of small differences. Namely, BERT-base-italian is the best and second-best model for the MLP and SVR architectures respectively, while the worst-performing model is AlBERTo with the MLP and, with the SVR probing architecture, the UmBERTo model trained on Italian Wikipedia.

However, this trend does not hold when we analyse the NLMs’ performance with respect to the encoding of the different groups of linguistic phenomena. For instance, we can notice that, for both probing architectures, tree structure properties (TreeStructure) are predicted more accurately by RoBERTa-style models, i.e. by GilBERTo and UmBERTo-Commoncrawl, than by models based on BERT or GPT-2. For the MLP only, a similar pattern can be observed for the prediction of two other linguistic properties referring to sub-trees of the whole syntactic structure of a sentence: GilBERTo and UmBERTo-Commoncrawl are the two models best able to encode the use of subordination (Subord) and verbal predicate structure (VerbPredicate). Further differences between the probing architectures emerge when considering the NLMs’ abilities to encode competence related to vocabulary richness (Vocabulary): while UmBERTo-Wikipedia extensively outperforms all the other transformers with the MLP model, the best transformer is BERT-base-italian when these competences are probed with the LinearSVR model.

Additional observations can be made if we move to the analysis of how the NLMs’ prediction abilities change and evolve across layers. As can be seen in Figure 2, regardless of the architecture, for all transformers linguistic competence tends to decrease across the 12 layers. This is in line with previous findings (N. F. Liu et al. 2019; Miaschi, Brunato, et al. 2020) and could be due to the fact that transformer layers trade off task-oriented information (e.g. Masked Language Modeling) against general linguistic competence. Such a decreasing trend can be specifically observed, for example, for the ability to predict raw text features or the distribution of the UD morpho-syntactic categories (POS) and syntactic dependencies (SyntacticDep): these are sentence properties mainly encoded in the first layers by all NLMs. On the contrary, we can observe a number of more complex linguistic features whose knowledge increases consistently across layers, even if it decreases in the output layer. This is the case for features referring to structural sentence knowledge, such as the order of subject/object with respect to the verbal head (Order) and the use of subordination (Subord). In addition, contrary to what was observed by Vries, Cranenburgh, and Nissim (2020), Multilingual-BERT’s linguistic knowledge is not encoded systematically earlier than in monolingual transformers.

This perspective of analysis also reveals differences among the considered transformers that were not previously apparent. By inspecting the trend of the \(R^2\) scores across layers, we can for example see that even though GePpeTto has a lower average competence on verb inflection (see Figure 1), it achieves the highest scores in the middle layers. Similarly, even though we previously noted that RoBERTa-style transformers are better at predicting features related to the structure of a sentence (TreeStructure), the highest accuracy is achieved by a BERT-style model, i.e. BERT-base-italian, in the -4 layer. A similar observation concerns the use of subordination and verbal predicate structure: the two groups of features are in general predicted more accurately by GilBERTo and UmBERTo-Commoncrawl, but the highest \(R^2\) scores are achieved by Multilingual-BERT and BERT-base-italian in the -5 and -4 layers.

Focusing instead on differences between the layer-wise scores obtained by the two probing architectures, we can clearly notice that the encoding of linguistic knowledge shows a rather irregular trend in the results obtained with the MLP. This is particularly the case for features belonging to the vocabulary, POS and tree structure groups.

Figure 3: Average \(R^2\) scores obtained for each probing feature using the two probing architectures tested with the internal representations of the 7 NLMs. Both heatmaps are ordered on the basis of the feature ranking as predicted by the AlBERTo model using the LinearSVR architecture.

Figure 4: Average LinearSVR \(R^2\) scores considering all the UD Italian sentences (all) and according to the 10 treebanks previously described.

If we deepen our investigation and focus on the linguistic generalization ability of the NLMs with respect to each individual feature (see Figure 3), we can clearly observe that the rankings by \(R^2\) score are quite similar regardless of the probing architecture and the transformer model. It is also interesting to note that, despite some deviations, the distinction into macro-groups of linguistic phenomena is mostly preserved across the rankings. In fact, raw-text features, as well as the distributions of POS tags (upos_dist_*, xpos_dist_*) and dependency relations (dep_dist_*), are those best predicted by the two probing models, while features more related to the structural information of a sentence, such as the order of elements (e.g. subj_pre, subj_post and obj_post) or the structure of the parse tree (e.g. avg_token_per_clause, avg_prep_chain_len), achieved lower probing scores. Lower results also concern the prediction of the morphological features of lexical and auxiliary verbs, for example their mood (verb_mood_*) or tense (verb_tense_*).

In line with what was observed in Figure 1, we can see that in a few cases the linguistic competence of the AlBERTo model is significantly different from (lower than) that of the other models. The most remarkable case concerns the distribution of punctuation marks, both at the level of morpho-syntactic category (upos_dist_PUNCT) and dependency relation (dep_dist_punct), and more specifically the distribution of commas (xpos_dist_FF) and balanced punctuation (xpos_dist_FB). This appears particularly evident using the MLP as probing architecture and is possibly related to the typology of texts the AlBERTo model was trained on, i.e. Twitter. It is well known that social media represent a non-standard language variety, characterised by specific linguistic properties mostly different from ordinary language (Farzindar and Inkpen 2015), such as short sentences where punctuation marks, especially weak ones, are rarely used. Accordingly, the low frequency of punctuation in the training corpus possibly explains AlBERTo’s reduced generalization abilities with respect to this specific set of features.

3.3 Comparison of Italian Language Varieties

Our last analysis concerns the impact of the considered Italian language varieties on the NLMs’ linguistic abilities. For this purpose, we inspected whether the overall linguistic competence encoded in the contextual representations of each model changes across the different IUDT sections. The results reported in Figure 4 show that all transformers, regardless of the probing architecture, achieve lower performance when they have to predict the value of features extracted from treebanks representative of social media language (PoSTWITA and TWITTIRÒ) and from the subset of ISDT sentences in the interrogative form (ISDT_quest). In both cases, this seems to support our initial intuition that NLMs trained on standard language varieties, represented for example by Wikipedia pages, websites or web-crawled documents, may be less robust to non-standard varieties that were possibly unseen, or rarely seen, during the pre-training process. Quite surprisingly, even though AlBERTo was trained on Twitter data, it obtains the lowest \(R^2\) scores also when its internal representations are used to predict the feature values of the two social media Italian treebanks. A possible explanation is that, although PoSTWITA and TWITTIRÒ contain sentences representative of Twitter language, these sentences are still quite close to standard Italian, in order to be compliant with the UD morpho-syntactic and syntactic annotation schema. On the contrary, AlBERTo’s training set is derived from Twitter’s official streaming API, which includes all possible typologies of sentences.

It is also worth noting that BERT-base-italian and GePpeTto are the two models slightly less affected by the non-standard linguistic peculiarities of the social media variety. As noted in Section 3.2, they are the two best-performing models in terms of overall linguistic competence. This may explain why they are more robust in the accurate prediction of feature values across all the considered IUDT sections. This holds with both the LinearSVR and the MLP probing architecture, even if in the latter case the two versions of UmBERTo achieve comparable or slightly better scores. A main exception is represented by the ISDT sub-section including sentences in the interrogative form (ISDT_quest), which, as we noted above, are hardly mastered by any model. This is possibly due to the fact that interrogative sentences are more likely to display a less canonical distribution of morpho-syntactic and syntactic phenomena, hence being more difficult to encode effectively. In this case, the transformer based on GPT-2, i.e. GePpeTto, proves to be the NLM with the highest linguistic knowledge of this type of sentences.

Table 5: Spearman correlations between the rankings of features as predicted by the 7 NLMs on four sections of the IUDT treebank: ISDT_2parole (2par), ISDT_tanl (tanl), ISDT_quest (quest) and PoSTWITA (ptw). Dashes mark correlations that are not statistically significant.

                             LinearSVR                   MLP
Model               Section  2par  tanl  quest  ptw      2par  tanl  quest  ptw
alberto             2par     1                           1
                    tanl     .72   1                     .85   1
                    quest    .38   .38   1               .62   .56   1
                    ptw      .76   .82   .45    1        .75   .80   .58    1
bert-base-italian   2par     1                           1
                    tanl     .68   1                     .82   1
                    quest    .34   .41   1               .62   .47   1
                    ptw      .72   .91   .40    1        .75   .88   .47    1
geppetto            2par     1                           1
                    tanl     .65   1                     .80   1
                    quest    .30   .38   1               .64   .50   1
                    ptw      .70   .92   .48    1        .72   .88   .47    1
gilberto            2par     1                           1
                    tanl     .61   1                     .77   1
                    quest    .30   .40   1               .58   .54   1
                    ptw      .66   .88   .46    1        .69   .82   .49    1
mbert               2par     1                           1
                    tanl     .65   1                     .76   1
                    quest    .30   .37   1               .55   .47   1
                    ptw      .71   .90   .45    1        .71   .83   .46    1
umberto-commoncr.   2par     1                           1
                    tanl     .58   1                     .71   1
                    quest    .28   .33   1               .55   .47   1
                    ptw      .69   .80   .39    1        .65   .75   .35    1
umberto-wikipedia   2par     1                           1
                    tanl     .57   1                     .70   1
                    quest    -     -     1               .50   .44   1
                    ptw      .66   .72   .36    1        .69   .72   .36    1

A further analysis of the impact of language varieties on the ability of NLMs to encode the considered groups of linguistic phenomena can be appreciated in Table 5. It shows, for each probing architecture, the Spearman correlations between the rankings of features predicted by all NLMs, ordered by decreasing \(R^2\) scores, on three ISDT sub-sections, i.e. ISDT_tanl, ISDT_2parole and ISDT_quest, and on PoSTWITA. For each NLM, higher correlations correspond to similar linguistic generalization abilities across the paired treebanks, while lower correlations suggest that the NLM’s inner representations are effective at predicting different linguistic features in the two treebanks. As we can see, regardless of the probing architecture, for all NLMs the highest correlated rankings are those obtained comparing the features predicted on ISDT_tanl (tanl) and PoSTWITA (ptw). Although quite surprising, this result can be explained by assuming that the morpho-syntactic and syntactic features of the Twitter sentences contained in PoSTWITA are not so dramatically different from those characterising ISDT_tanl newspaper articles. In fact, among all the IUDT sections considered here, they turned out to be the two most similar treebanks with respect to the distribution of the set of linguistic features reported in Table 3. In particular, the main differences concern the distribution of some morpho-syntactic categories (e.g. punctuation, nouns) and of the main features related to the inflectional morphology of verbs, e.g. the distribution of present tenses, higher in PoSTWITA (51.11% of all verb tenses) than in ISDT_tanl (34.95%), or of past tenses, which in the Twitter sentences are less than half as frequent as in the newspaper ones. Interestingly, these characteristics belong to the group of features that the NLMs are able to master quite accurately, regardless of the language variety. Even if these differences negatively affected the overall probing performance on PoSTWITA, as shown in Figure 4, the models’ good command of these specific features presumably had little effect on the ranking of the predicted features, thus yielding quite high correlations.

On the contrary, the lowest correlations can be observed when we compare the rankings obtained for the pairs of treebanks involving the set of sentences in the interrogative form, i.e. ISDT_quest (quest). Even if the correlation values are slightly higher using the MLP, this trend holds for both probing architectures and for all NLMs. Note that the correlations between the rankings obtained with UmBERTo-Wikipedia for the pairs ISDT_quest/ISDT_2parole and ISDT_quest/ISDT_tanl are not even statistically significant. Recall that this is the NLM that achieved the lowest prediction accuracy with the LinearSVR probing architecture (see Figure 1). Our intuition is that this may have made it less robust in the prediction of non-standard linguistic forms, such as interrogative sentences. Similarly to what was observed above, these results can be explained if we analyse the feature values in the considered treebanks. ISDT_quest turned out to be quite different from all the other treebanks, particularly with respect to complex aspects of sentence structure. For example, the canonical order of the nuclear elements of a sentence (i.e. subject and object) is largely subverted in sentences in the interrogative form. Thus, they contain a very high percentage of post-verbal explicit subjects (68.69% of the total), several times higher than ISDT_tanl (15.21%) and PoSTWITA (12.63%) and an order of magnitude higher than ISDT_2parole (7.55%). Sentences in the interrogative form also have a lower percentage of post-verbal objects (17.31%), which instead represent the majority of cases in the other treebanks, and they are characterised by a very low distribution of subordinate clauses in general, and in particular of subordinates following the principal clause, i.e. 4% vs. 43% in ISDT_tanl, 35.78% in ISDT_2parole and 44.36% in PoSTWITA. These and other similar features all concern structural aspects of a sentence that may have undermined the overall NLM linguistic competence, yielding not only lower probing scores on ISDT_quest but also different feature rankings with respect to the other treebanks.

4. Conclusion

In this paper we presented an in-depth comparative investigation of the linguistic knowledge encoded by Italian transformer models. Relying on a suite of 82 probing features and on two different probing architectures, we performed a number of complementary investigations, all carried out on different sections of the Italian Universal Dependency Treebank (IUDT) representative of diverse textual genres and language varieties.

Firstly, we showed experimentally that non-linear architectures such as the multi-layer perceptron (MLP) capture a broader range of the information encoded in learned representations than their linear counterparts, and as such can be considered more suitable for studying highly nonlinear models such as NLMs. In this sense, our results support the information-theoretic operationalization of probing proposed by Pimentel et al. (2020). However, the feature rankings produced by the MLP and the LinearSVR in terms of probing performance are quite similar. Namely, both are particularly able to probe raw text properties, as well as the distribution of Parts-Of-Speech and dependency relations, while they obtained lower scores for features referring to the order of subject and object with respect to their verbal head and to verbal predicate structure.

The subsequent comparison of the linguistic generalization abilities of the 7 Transformers showed that, if we analyse the results considering all the probing features as a whole, few differences can be observed. Similarly to what was observed for English (N. F. Liu et al. 2019) and Dutch (Vries, Cranenburgh, and Nissim 2020), we showed that, regardless of the probing architecture, for all transformers the internal layers (i.e. -6/-4) are the most informative ones and linguistic competence tends to decrease across the 12 layers. However, contrary to Vries, Cranenburgh, and Nissim (2020), our findings reveal that Multilingual-BERT’s linguistic knowledge is not encoded systematically earlier than in monolingual transformers. More interesting outcomes emerge when we focus on the embedded knowledge of each group of linguistic characteristics. We noticed, for example, that global and local tree structure properties are predicted more accurately by RoBERTa-style models, i.e. by GilBERTo and UmBERTo-Commoncrawl, than by models based on BERT or GPT-2. We obtained additional information when we narrowed our analysis to how the NLMs’ prediction abilities evolve across layers, showing for example that the highest competence on tree structure is achieved by a BERT-style model, i.e. BERT-base-italian, in the -4 layer. A more in-depth comparison of the ranking of each individual feature by \(R^2\) score also revealed that, even if the 7 Transformers are quite similar, a main exception is represented by the AlBERTo model. In particular, it showed reduced generalization abilities concerning the use of punctuation. Our intuition is that this is possibly related to the typology of texts the AlBERTo model was trained on, i.e. Twitter, where punctuation marks are rarely used.

Finally, we showed that the level of the NLMs’ linguistic competence changes according to the diverse linguistic varieties of IUDT. All Transformers proved less robust in the prediction of the linguistic properties characterising sentences representative of social media language and sentences in the interrogative form. This is possibly due to the fact that the two types of sentences are characterised by non-canonical distributions of morpho-syntactic and syntactic phenomena, possibly rarely or never seen during the training phase. Surprisingly, the AlBERTo model too, even though it was trained on Twitter data, achieved very low performance, while BERT-base-italian and GePpeTto are the two models slightly less affected by the non-standard linguistic varieties. Although both social media texts and questions appear to be quite challenging testbeds, our in-depth investigation of how each probing feature is ranked by the NLMs allowed us to highlight noteworthy differences. We observed that the most divergent rankings concern the test on sentences in the interrogative form, which turn out to be characterised by distributions of structural aspects very different from the other IUDT sections.

In terms of present and future research directions, we are currently investigating how the linguistic knowledge encoded by an NLM affects the resolution of downstream tasks, following recent works highlighting the tendency of pretrained NLMs to lose general linguistic information during the fine-tuning process and the connection between encoded linguistic information and models’ downstream performance for the English language (Miaschi, Brunato, et al. 2020; Sarti, Brunato, and Dell’Orletta 2021). These connections, which are still only sporadically investigated, can cast light on the decision process inside NLMs and ultimately lead to an improved understanding and utilization of these systems in real-world scenarios.


Bibliography

Alain, Guillaume, and Yoshua Bengio. 2017. “Understanding Intermediate Layers Using Linear Classifier Probes.” In Workshop Track of the Fifth International Conference on Learning Representations (ICLR 2017). Toulon, France. https://openreview.net/forum?id=HJ4-rAVtl.

Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. “The Wacky Wide Web: A Collection of Very Large Linguistically Processed Web-Crawled Corpora.” Language Resources and Evaluation 43 (3): 209–26.

Basile, Valerio, Mirko Lai, and Manuela Sanguinetti. 2018. “Long-Term Social Media Data Collection at the University of Turin.” In Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018), edited by Tommaso Caselli, Nicole Novielli, Viviana Patti, and Paolo Rosso, 2263:1–6. Turin, Italy: CEUR Workshop Proceedings.

Belinkov, Yonatan, and James Glass. 2019. “Analysis Methods in Neural Language Processing: A Survey.” Transactions of the Association for Computational Linguistics 7 (April): 49–72. https://doi.org/10.1162/tacl_a_00254.

Belinkov, Yonatan, Lluı́s Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. “Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks.” In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1–10. Taipei, Taiwan: Asian Federation of Natural Language Processing. https://aclanthology.org/I17-1001.

Blevins, Terra, Omer Levy, and Luke Zettlemoyer. 2018. “Deep RNNs Encode Soft Hierarchical Syntax.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 14–19. Melbourne, Australia: Association for Computational Linguistics. https://doi.org/10.18653/v1/P18-2003.

Bosco, Cristina, Simonetta Montemagni, and Maria Simi. 2013. “Converting Italian Treebanks: Towards an Italian Stanford Dependency Treebank.” In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, 61–69. Sofia, Bulgaria: Association for Computational Linguistics. https://aclanthology.org/W13-2308.

Brunato, Dominique, Andrea Cimino, Felice Dell’Orletta, Giulia Venturi, and Simonetta Montemagni. 2020. “Profiling-Ud: A Tool for Linguistic Profiling of Texts.” In Proceedings of the 12th Language Resources and Evaluation Conference, 7147–53. Marseille, France: European Language Resources Association. https://www.aclweb.org/anthology/2020.lrec-1.883.

Cignarella, Alessandra Teresa, Cristina Bosco, and Paolo Rosso. 2019. “Presenting TWITTIRÒ-UD: An Italian Twitter Treebank in Universal Dependencies.” In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, Syntaxfest 2019), 190–97. Paris, France: Association for Computational Linguistics. https://doi.org/10.18653/v1/W19-7723.

Conneau, Alexis, German Kruszewski, Guillaume Lample, Loı̈c Barrault, and Marco Baroni. 2018. “What You Can Cram into a Single $&!#* Vector: Probing Sentence Embeddings for Linguistic Properties.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2126–36. Melbourne, Australia: Association for Computational Linguistics. https://doi.org/10.18653/v1/P18-1198.

Delmonte, Rodolfo, Antonella Bristot, and Sara Tonelli. 2007. “VIT - Venice Italian Treebank: Syntactic and Quantitative Features.” In Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories. Bergen, Norway.

De Mattei, Lorenzo, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, and Marco Guerini. 2020. “GePpeTto Carves Italian into a Language Model.” In Proceedings of the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020), edited by Felice Dell’Orletta, Johanna Monti, and Fabio Tamburini, 136–43. Bologna, Italy (Online). https://doi.org/10.4000/books.aaccademia.8203.

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–86. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423.

Farzindar, Atefeh, and Diana Inkpen. 2015. Natural Language Processing for Social Media. Synthesis Lectures on Human Language Technologies, Morgan & Claypool.

Guarasci, Raffaele, Stefano Silvestri, Giuseppe De Pietro, Hamido Fujita, and Massimo Esposito. 2021. “Assessing BERT’s Ability to Learn Italian Syntax: A Study on Null-Subject and Agreement Phenomena.” Journal of Ambient Intelligence and Humanized Computing, 1–15.

Guarasci, Raffaele, Stefano Silvestri, Giuseppe De Pietro, Hamido Fujita, and Massimo Esposito. 2022. “BERT Syntactic Transfer: A Computational Experiment on Italian, French and English Languages.” Computer Speech & Language 71. https://doi.org/10.1016/j.csl.2021.101261.

Hewitt, John, and Percy Liang. 2019. “Designing and Interpreting Probes with Control Tasks.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2733–43. Hong Kong, China: Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1275.

Hewitt, John, and Christopher D. Manning. 2019. “A Structural Probe for Finding Syntax in Word Representations.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4129–38. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1419.

Hochreiter, Sepp, and Jürgen Schmidhuber. 1997. “Long Short-Term Memory.” Neural Computation 9 (8): 1735–80.

Jawahar, Ganesh, Benoı̂t Sagot, and Djamé Seddah. 2019. “What Does BERT Learn About the Structure of Language?” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3651–7. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1356.

Liu, Nelson F., Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. “Linguistic Knowledge and Transferability of Contextual Representations.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 1073–94. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1112.

Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv preprint arXiv:1907.11692.

Miaschi, Alessio, Dominique Brunato, Felice Dell’Orletta, and Giulia Venturi. 2020. “Linguistic Profiling of a Neural Language Model.” In Proceedings of the 28th International Conference on Computational Linguistics, 745–56. Barcelona, Spain (Online): International Committee on Computational Linguistics. https://doi.org/10.18653/v1/2020.coling-main.65.

Miaschi, Alessio, Gabriele Sarti, Dominique Brunato, Felice Dell’Orletta, and Giulia Venturi. 2020. “Italian Transformers Under the Linguistic Lens.” In Proceedings of the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020), edited by Johanna Monti, Felice Dell’Orletta, and Fabio Tamburini. Online: CEUR.org.

Nivre, Joakim. 2015. “Towards a Universal Grammar for Natural Language Processing.” In Computational Linguistics and Intelligent Text Processing, edited by Alexander Gelbukh, 3–16. New York: Springer.

Ortiz Suárez, Pedro Javier, Benoît Sagot, and Laurent Romary. 2019. “Asynchronous Pipelines for Processing Huge Corpora on Medium to Low Resource Infrastructures.” In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019, edited by Piotr Bański, Adrien Barbaresi, Hanno Biber, Evelyn Breiteneder, Simon Clematide, Marc Kupietz, Harald Lüngen, and Caroline Iliadi, 9–16. Cardiff: Leibniz-Institut für Deutsche Sprache. https://doi.org/10.14618/ids-pub-9021.

Pimentel, Tiago, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. “Information-Theoretic Probing for Linguistic Structure.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4609–22. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.420.

Polignano, Marco, Pierpaolo Basile, Marco de Gemmis, Giovanni Semeraro, and Valerio Basile. 2019. “AlBERTo: Italian BERT Language Understanding Model for NLP Challenging Tasks Based on Tweets.” In Proceedings of the Sixth Italian Conference on Computational Linguistics (CLiC-it 2019), edited by Raffaella Bernardi, Roberto Navigli, and Giovanni Semeraro. Bari, Italy.

Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners.” Technical Report.

Sanguinetti, Manuela, and Cristina Bosco. 2015. “PartTUT: The Turin University Parallel Treebank.” In Harmonization and Development of Resources and Tools for Italian Natural Language Processing Within the PARLI Project, edited by Roberto Basili et al., 51–69. Springer. https://link.springer.com/book/10.1007/978-3-319-14206-7.

Sanguinetti, Manuela, Cristina Bosco, Alberto Lavelli, Alessandro Mazzei, Oronzo Antonelli, and Fabio Tamburini. 2018. “PoSTWITA-UD: An Italian Twitter Treebank in Universal Dependencies.” In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan: European Language Resources Association (ELRA). https://aclanthology.org/L18-1279.

Sarti, Gabriele, Dominique Brunato, and Felice Dell’Orletta. 2021. “That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models.” In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 48–60. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.cmcl-1.5.

Tenney, Ian, Dipanjan Das, and Ellie Pavlick. 2019. “BERT Rediscovers the Classical NLP Pipeline.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4593–4601. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1452.

Tenney, Ian, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, et al. 2019. “What Do You Learn from Context? Probing for Sentence Structure in Contextualized Word Representations.” In Proceedings of the Seventh International Conference on Learning Representations (ICLR 2019). New Orleans, Louisiana, USA.

Tiedemann, Jörg, and Lars Nygaard. 2004. “The OPUS Corpus – Parallel and Free.” In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04). Lisbon, Portugal: European Language Resources Association (ELRA). https://aclanthology.org/L04-1174/.

Vries, Wietse de, Andreas van Cranenburgh, and Malvina Nissim. 2020. “What’s so Special About BERT’s Layers? A Closer Look at the NLP Pipeline in Monolingual and Multilingual Models.” In Findings of the Association for Computational Linguistics: EMNLP 2020, 4339–50. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.389.

Zeman, Daniel, Joakim Nivre, Mitchell Abrams, et al. 2019. “Universal Dependencies 2.5.” LINDAT/CLARIAH-CZ Digital Library at the Institute of Formal and Applied Linguistics (ÚFAL). http://hdl.handle.net/11234/1-3105.

Zhang, Kelly, and Samuel Bowman. 2018. “Language Modeling Teaches You More Than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis.” In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 359–61. Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5448.


Notes

1 https://github.com/dbmdz/berts

2 https://github.com/idb-ita/GilBERTo

3 https://github.com/musixmatchresearch/umberto

4 For the list of UD Parts-Of-Speech refer to https://universaldependencies.org/u/pos/index.html, while for the language-specific one to http://www.italianlp.it/docs/ISST-TANL-POStagset.pdf

5 The Coefficient of determination (\(R^2\)) is a statistical measure of how close the data are to the fitted regression line and corresponds to the proportion of the variance in the dependent variable that is predictable from the independent variable(s).


References

Electronic reference

Alessio Miaschi, Gabriele Sarti, Dominique Brunato, Felice Dell’Orletta and Giulia Venturi, “Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties”, IJCoL [Online], 8-1 | 2022, online since 01 July 2022. URL: http://journals.openedition.org/ijcol/965; DOI: https://doi.org/10.4000/ijcol.965


About the authors

Alessio Miaschi

Department of Computer Science, Università di Pisa; Istituto di Linguistica Computazionale “Antonio Zampolli”, CNR, Pisa - ItaliaNLP Lab. E-mail: ale.miaschi@gmail.com

Gabriele Sarti

Center for Language and Cognition, University of Groningen. E-mail: g.sarti@rug.nl


Dominique Brunato

Istituto di Linguistica Computazionale “Antonio Zampolli”, CNR, Pisa - ItaliaNLP Lab. E-mail: dominique.brunato@ilc.cnr.it


Felice Dell’Orletta

Istituto di Linguistica Computazionale “Antonio Zampolli”, CNR, Pisa - ItaliaNLP Lab. E-mail: felice.dellorletta@ilc.cnr.it


Giulia Venturi

Istituto di Linguistica Computazionale “Antonio Zampolli”, CNR, Pisa - ItaliaNLP Lab. E-mail: giulia.venturi@ilc.cnr.it



Copyright

CC-BY-NC-ND-4.0

The text only may be used under licence CC BY-NC-ND 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
