Compositional distributional semantics is a flourishing research area that leverages distributional semantics (see (Baroni and Lenci 2010)) to produce the meaning of simple phrases and full sentences (hereafter called text fragments). The aim is to scale up the success of word-level relatedness detection to longer fragments of text. Determining similarity or relatedness among sentences is useful for many applications, such as multi-document summarization, recognizing textual entailment (Dagan et al. 2013), and semantic textual similarity detection (Agirre et al. 2013). Compositional distributional semantics models (CDSMs) are functions mapping text fragments to vectors (or higher-order tensors). Functions for simple phrases directly map distributional vectors of words to distributional vectors for the phrases (Mitchell and Lapata 2008; Baroni and Zamparelli 2010; Zanzotto et al. 2010). Functions for full sentences are generally defined as recursive functions over the ones for phrases (Socher et al. 2011). Distributional vectors for text fragments are then used as input to larger machine learning algorithms, for example as layers in neural networks, or to compute similarity among text fragments directly via dot product or cosine similarity.
CDSMs generally exploit structured representations tx of text fragments x to derive their meaning, in the form of a vector of real numbers f(tx). The structural information, although extremely important, is only used to guide the composition process and is then obscured in the final vectors. Structure and meaning can interact in unexpected ways when computing cosine similarity (or dot product) between the vectors of two text fragments, as shown for full additive models in (Ferrone and Zanzotto 2013).
Smoothed tree kernels (STKs) are instead a family of kernels that realize a clearer interaction between structural information and distributional meaning (Croce, Moschitti, and Basili 2011; Mehdad, Moschitti, and Zanzotto 2010). STKs are specific realizations of convolution kernels (Haussler 1999) where the similarity function is recursively (and, thus, compositionally) computed. Distributional vectors are used to represent word meaning when computing the similarity among nodes. STKs, however, are not considered part of the CDSM family: as usual in kernel machines (Cristianini and Shawe-Taylor 2000), STKs directly compute the similarity between two text fragments x and y over their tree representations tx and ty, that is, STK(tx,ty). Because STK is a valid kernel, there exists a function f : T → ℝn such that:
\[STK(t^{x}, t^{y}) = \left \langle f(t^{x}), f(t^{y}) \right \rangle\]
However, the function f that maps trees into vectors is never explicitly used, and, thus, STK(tx,ty) is not explicitly expressed as the dot product or the cosine between f(tx) and f(ty).
Such a function f, which is the underlying reproducing function of the kernel (Aronszajn 1950), would be a CDSM in its own right, since it maps trees to vectors while also including distributional meaning. However, the huge dimensionality of ℝn (since it has to represent the set of all possible subtrees) prevents the function f(t) from being computed explicitly; it can thus only remain implicit.
Distributed tree kernels (DTKs) (Zanzotto and Dell’Arciprete 2012a) partially solve this problem. DTKs approximate standard tree kernels (such as (Collins and Duffy 2002)) by defining an explicit function DT that maps trees to vectors in ℝm, where m ≪ n and ℝn is the explicit space of the tree kernels. DTKs approximate standard tree kernels (TK), that is,
\[\left \langle DT(t^{x}), DT(t^{y}) \right \rangle \approx TK(t^{x}, t^{y}) \]
by approximating the corresponding reproducing function. In this sense distributed trees are low-dimensional vectors that encode structural information. In DTKs, tree nodes u and v are represented by nearly orthonormal vectors, that is, vectors u and v such that ⟨u, v⟩ ≈ δ(u, v), where δ is Kronecker’s delta function, defined as:
\[\delta (\mathbf{u},\mathbf{v}) = \begin{cases} 1 & \text{if } \mathbf{u} = \mathbf{v}\\ 0 & \text{if } \mathbf{u} \neq \mathbf{v}\end{cases}\]
This is in contrast with distributional semantic vectors, where the dot product ⟨u, v⟩ is allowed to take any value in [0,1] according to the semantic similarity between the words u and v.
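The quasi-orthonormality of random node vectors is easy to check numerically. The following is a minimal sketch (our own illustration, not the authors' code), assuming a dimensionality of 2048:

```python
# Random high-dimensional unit vectors are "nearly orthonormal": pairwise
# dot products concentrate around 0, while each self product is exactly 1.
import numpy as np

rng = np.random.default_rng(0)
d = 2048  # assumed dimensionality of the node vectors

def random_unit_vector(d, rng):
    """Draw a vector uniformly from the d-dimensional hypersphere."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

u = random_unit_vector(d, rng)
v = random_unit_vector(d, rng)

print(np.dot(u, u))  # exactly 1.0 (up to floating point)
print(np.dot(u, v))  # close to 0, typically within about 3/sqrt(d) ≈ 0.066
```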
In this paper, building on distributed trees, we present a novel class of CDSMs that encode both structure and distributional meaning: the distributed smoothed trees (DSTs). DSTs encode structure and distributional meaning in a rank-2 tensor (a matrix): one dimension encodes the structure and one dimension encodes the meaning. By using DSTs to compute the similarity among sentences with a generalized dot product (or cosine), we implicitly define the distributed smoothed tree kernels (DSTKs), which approximate the corresponding STKs.
We present two DSTs along with the two smoothed tree kernels (STKs) that they approximate.
We experiment with our DSTs to show that their generalized dot products approximate STKs, both by directly comparing the produced similarities and by comparing their performances on two tasks: recognizing textual entailment (RTE) and semantic textual similarity detection (STS). Both experiments show that the dot product on DSTs approximates STKs and, thus, that DSTs encode both the structural and the distributional semantics of text fragments in tractable rank-2 tensors. The experiments on STS and RTE also show that the distributional semantics encoded in DSTs increases performance over structure-only kernels.
DSTs are the first effective way of taking both structure and distributional meaning into account in CDSMs.
The rest of the paper is organized as follows. Section 2 introduces the necessary background on distributed trees (Zanzotto and Dell’Arciprete 2012a) used in the rest of the paper, and Section 3.1 introduces the basic notation used in the paper. Section 3 describes our distributed smoothed trees as compositional distributional semantic models that can represent both structural and semantic information. Section 5 reports on the experiments. Finally, Section 6 draws some conclusions and directions for future work.
Encoding Structures with Distributed Trees

Distributed trees (DT) (Zanzotto and Dell’Arciprete 2012b) are a technique to embed the structural information of a syntactic tree into a dense, low-dimensional vector of real numbers. DTs were introduced to allow one to exploit the modelling capacity of tree kernels (Collins and Duffy 2001) without their computational complexity. More specifically, for each tree kernel TK (Aiolli, Da San Martino, and Sperduti 2009; Collins and Duffy 2002; Vishwanathan and Smola 2002; Kimura et al. 2011) there is a corresponding distributed tree function (Zanzotto and Dell’Arciprete 2012b) which maps trees to vectors:
\[\begin{align*}\mathrm{DT}: T &\rightarrow \mathbb{R}^{d} \\ t &\mapsto \mathrm{DT}(t)= \mathbf{t} \end{align*}\]
such that:
\[\tag{1}\left \langle \mathrm{DT}(t_{1}), \mathrm{DT}(t_{2}) \right \rangle \approx \mathrm{TK}(t_{1}, t_{2})\]
where t ∈ T is a tree, ⟨⋅,⋅⟩ indicates the standard inner product in ℝd and TK(⋅,⋅) represents the original tree kernel. It has been shown that the quality of the approximation depends on the dimension d of the embedding space ℝd.
To approximate tree kernels, distributed trees rely on the following property: it is possible to represent the subtrees τ ∈ S(t) of a given tree t as distributed tree fragments DTF(τ) ∈ ℝd such that:
\[\tag{2}\left \langle \mathrm{DTF}(\tau_{1}), \mathrm{DTF}(\tau_{2}) \right \rangle \approx \delta (\tau_{1}, \tau_{2})\]
where δ is Kronecker’s delta function. With this definition, we can define the distributed tree of a given tree t as a summation over all of its subtrees, that is:
\[\mathrm{DT}(t) = \sum_{\tau \in S(t)}\sqrt{\lambda}^{\left |\mathit{N}(\tau ) \right |}\mathrm{DTF}(\tau )\]
where λ is the classical decaying factor of tree kernels (Collins and Duffy 2002), used to penalize larger trees, and |N(τ)| is the number of nodes of the subtree τ. With this definition in place, one can show that the property in Equation 1 holds.
Distributed tree fragments are defined as follows. To each node label n we associate a random vector n drawn uniformly from the d-dimensional hypersphere. Random vectors of high dimensionality have the property of being quasi-orthonormal (that is, they obey a relationship similar to Equation (2)). The following function is then defined:
\[\mathrm{DTF}(\tau ) = \bigodot_{n \in N(\tau )}\mathrm{n}\]
where ⊙ indicates the shuffled circular convolution operation 1, which has the property of preserving quasi-orthonormality between vectors.

- 1 The circular convolution between a and b is defined as the vector c with components \(c_{i} = \sum_{j} a_{j}\, b_{(i-j) \bmod d}\).
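As a concrete illustration, here is a hedged sketch of a shuffled circular convolution and of DTF(τ). The FFT-based convolution is standard; the use of fixed random permutations follows the general recipe of Zanzotto and Dell’Arciprete (2012), but the exact implementation details may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2048
perm1, perm2 = rng.permutation(d), rng.permutation(d)  # fixed random permutations

def circular_convolution(a, b):
    # c_i = sum_j a_j * b_{(i - j) mod d}, computed via FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def shuffled_circular_convolution(a, b):
    # Shuffling the operands with two fixed permutations before convolving
    # keeps the result nearly orthogonal to both inputs.
    return circular_convolution(a[perm1], b[perm2])

def dtf(node_vectors):
    """DTF(τ): fold the (ordered) node vectors of the subtree τ
    with the shuffled circular convolution."""
    result = node_vectors[0]
    for n in node_vectors[1:]:
        result = shuffled_circular_convolution(result, n)
    return result
```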
To compute distributed trees efficiently, however, a different (but equivalent) formulation is used. First, for each node n of a tree t we define a function SN(n) that collects all the distributed tree fragments of t headed at n:
\[\tag{3} \mathrm{SN}(n) = \begin{cases} \mathbf{0} & \text{if } n \text{ is terminal}\\ \mathbf{n}\odot \bigodot_{i}\sqrt{\lambda} \left [ \mathbf{n}_{i}+\mathrm{SN}(n_{i}) \right ] & \text{otherwise} \end{cases}\]
where ni are the direct children of n in the tree t. Given SN(n), distributed trees can be efficiently computed as:
\[\mathrm{DT}(t) = \sum_{n \in N(t)}\mathrm{SN}(n)\]
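A minimal sketch of this efficient formulation follows (illustrative names, not the authors' implementation), reusing shuffled_circular_convolution from the sketch above; each node is assumed to carry the random vector of its label and a list of children:

```python
import numpy as np

LAMBDA = 0.4  # decay factor; 0.4 is the value used later in the experiments

class Node:
    def __init__(self, vector, children=()):
        self.vector = vector            # the random vector n of the node label
        self.children = list(children)

def sn(node):
    """SN(n) of Equation (3); the zero vector for terminal nodes."""
    if not node.children:
        return np.zeros_like(node.vector)
    acc = node.vector
    for child in node.children:
        acc = shuffled_circular_convolution(
            acc, np.sqrt(LAMBDA) * (child.vector + sn(child)))
    return acc

def dt(nodes):
    """DT(t): sum SN(n) over all nodes n of the tree t."""
    return sum(sn(n) for n in nodes)
```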
In the next section we finally generalize the ideas behind DTKs in order to also include semantic information.
We here propose a model that can be considered a compositional distributional semantic model, as it transforms sentences into matrices (which can also be seen as vectors, once they have been “flattened”) that can then be used by the learner as feature vectors. Our model is called the Distributed Smoothed Tree Kernel (Ferrone and Zanzotto 2014), as it mixes the distributed trees introduced in the previous section (Zanzotto and Dell’Arciprete 2012a), which represent syntactic information, with distributional semantic vectors representing semantic information, as used in the smoothed tree kernels (Croce, Moschitti, and Basili 2011).
Before describing the distributed smoothed trees (DSTs), we introduce a formal way to denote constituency-based lexicalized parse trees, as DSTs exploit this kind of data structure.
Lexicalized trees are denoted with the letter t, and N(t) denotes the set of non-terminal nodes of the tree t. Each non-terminal node n ∈ N(t) has a label ln composed of two parts ln = (sn, wn): sn is the syntactic label (for example NP, VP, S, and so forth), while wn is the semantic headword of the subtree headed by n, along with its part-of-speech tag. The semantic headwords are derived with the Stanford Parser implementation of Collins’ rules (Collins 1999).
Figure 1. A lexicalized tree
Figure 2. Subtrees of the tree t in figure (1) (a non-exhaustive list)
Terminal nodes of trees are treated differently: these nodes represent only words wn, without any additional information, and their labels thus consist only of the word itself. An example of such a structure can be seen in Figure 1.
The structure of a DST is represented as follows. Given a tree t, we will use h(t) to indicate its root node and s(t) to indicate its syntactic part, that is, the tree derived from t by considering only the syntactic structure (only the sn part of the labels). For example, the tree in Figure 1 is mapped to the tree:
Figure 1 bis. The syntax-only tree s(t) obtained from the tree in Figure 1
We will also use ci(n) to denote the i-th child of a node n. As usual for constituency-based parse trees, pre-terminal nodes are nodes that have a single terminal node as a child. Finally, we use wn ∈ ℝk to denote the distributional vector of the word wn.
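To fix ideas, the following is a minimal, hypothetical encoding of a lexicalized tree node (the names LexNode, syntax, head_word and head_vector are our own illustration, not the paper's notation):

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class LexNode:
    syntax: Optional[str]                      # s_n, e.g. "NP", "VP"; None for terminals
    head_word: str                             # w_n, the semantic head (or the word itself)
    head_vector: Optional[np.ndarray] = None   # distributional vector w_n ∈ R^k
    children: List["LexNode"] = field(default_factory=list)

# (NP (DT a) (NN dog)) with head word "dog::n" and a placeholder vector
np_node = LexNode("NP", "dog::n",
                  head_vector=np.zeros(300),
                  children=[LexNode("DT", "a::d", children=[LexNode(None, "a")]),
                            LexNode("NN", "dog::n", children=[LexNode(None, "dog")])])
```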
We describe the approach here in a few sentences. In line with tree kernels over structures (Collins and Duffy 2002), we introduce the set S(t) of the subtrees ti of a given lexicalized tree t. A subtree ti is in the set S(t) if s(ti) is a subtree of s(t) and, if n is a node in ti, all the siblings of n in t are in ti. For each node of ti we only consider its syntactic label sn, except for the head h(ti), for which we also consider its semantic component wn (see Figure 2).
In analogy with Equation (2), the DST functions we define compute the following sum:
\[\mathrm{DST}(t) = \mathbf{T} = \sum_{t_{i}\in S(t)}\mathbf{T}_{i}\]
where Ti is the matrix associated with each subtree ti (how this matrix is computed is explained below).
The similarity between two text fragments a and b represented as lexicalized trees ta and tb can then be computed using the Frobenius product between the two matrices Ta and Tb, that is:
\[\tag{4}\mathrm{DSTK}(t_{a}, t_{b}) = \left \langle \mathbf{T^{\mathit{a}}}, \mathbf{T^{\mathit{b}}} \right \rangle _{F}= \sum_{\substack{t_{i}^{a}\in S(t^{a})\\ t_{j}^{b}\in S(t^{b})}}\left \langle \mathbf{T^{\mathit{a}}_{\mathit{i}}}, \mathbf{T^{\mathit{b}}_{\mathit{j}}} \right \rangle _{F}\]
This is nothing more than the usual dot product between two vectors, if we flatten the two m × k matrices into two vectors, each with mk components.
We want to generalize Equation (2) and obtain that the product \(\left \langle \mathbf{T^{\mathit{a}}_{\mathit{i}}}, \mathbf{T^{\mathit{b}}_{\mathit{j}}} \right \rangle _{F}\) approximates the following similarity between lexicalized trees:
\[\left \langle \mathbf{T^{\mathit{a}}_{\mathit{i}}}, \mathbf{T^{\mathit{b}}_{\mathit{j}}} \right \rangle _{F} \approx \begin{cases} \left \langle \mathbf{w}_{\mathrm{h}(t_{i}^{a})}, \mathbf{w}_{\mathrm{h}(t_{j}^{b})} \right \rangle \text{ if } \mathrm{s}(t_{i}^{a})= \mathrm{s}(t_{j}^{b}) \\ 0 \text{ otherwise } \end{cases}\]
In other words, whenever two subtrees have the same syntactic structure, we define their similarity as the semantic similarity of their heads (computed via the dot product of the corresponding distributional vectors); when their syntactic structures differ, we define their similarity to be 0.
This definition can also be written as:
\[\tag{5} \left \langle \mathbf{T^{\mathit{a}}_{\mathit{i}}}, \mathbf{T^{\mathit{b}}_{\mathit{j}}} \right \rangle _{F} \approx \delta (\mathrm{s}(t_{i}^{a}), \mathrm{s}(t_{j}^{b})) \; \cdot \; \left \langle \mathbf{w}_{\mathrm{h}(t_{i}^{a})}, \mathbf{w}_{\mathrm{h}(t_{j}^{b})} \right \rangle \]
In order to obtain the above approximation property, we define:
\[\mathbf{T}_{i} = \mathrm{s}(\mathbf{t_{i}}) \otimes \mathbf{w}_{\mathrm{h}(t_{i})} \]
where s(ti) is the distributed tree fragment (Zanzotto and Dell’Arciprete 2012a) for the subtree ti, \(\mathbf{w}_{\mathrm{h}(t_{i})}\) is the distributional vector of the head of the subtree ti, and ⊗ denotes the tensor product. In this particular case, the tensor product is equivalent to the matrix \(\mathrm{s}(\mathbf{t_{i}}) \mathbf{w}_{\mathrm{h}(t_{i})}^{\top }\), the outer product between a column vector and a row vector.
Exploiting the following property of the tensor and Frobenius products:
\[\left \langle \mathbf{a}\: \otimes \: \mathbf{w}, \mathbf{b}\: \otimes \: \mathbf{v} \right \rangle_{F} = \left \langle \mathbf{a}, \mathbf{b} \right \rangle \cdot \left \langle \mathbf{w}, \mathbf{v} \right \rangle\]
we have that Equation (5) is satisfied as:
\[\begin{align*} \left \langle \mathbf{T}_{i}, \mathbf{T}_{j} \right \rangle_{F} &= \left \langle \mathrm{s}(\mathbf{t_{i}}), \mathrm{s}(\mathbf{t_{j})} \right \rangle \cdot \left \langle \mathbf{w}_{\mathrm{h}(t_{i})}, \mathbf{w}_{\mathrm{h}(t_{j})} \right \rangle \\ &\approx \delta (\mathrm{s}({t_{i}}), \mathrm{s}({t_{j})}) \cdot \left \langle \mathbf{w}_{\mathrm{h}(t_{i})}, \mathbf{w}_{\mathrm{h}(t_{j})} \right \rangle \end{align*}\]
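The identity above, and the earlier remark that the Frobenius product is just a flattened dot product, can be verified numerically. A short sketch with illustrative dimensions (1024 for the distributed part, 300 for the distributional part, as in the experiments below):

```python
# Numeric check of <a ⊗ w, b ⊗ v>_F = <a, b> · <w, v>, and of the fact that
# the Frobenius product equals the dot product of the flattened matrices.
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.standard_normal(1024), rng.standard_normal(1024)  # distributed vectors
w, v = rng.standard_normal(300), rng.standard_normal(300)    # distributional vectors

Ti = np.outer(a, w)   # T_i = s(t_i) ⊗ w_{h(t_i)}
Tj = np.outer(b, v)

frobenius = np.sum(Ti * Tj)                  # <T_i, T_j>_F
factored = np.dot(a, b) * np.dot(w, v)       # <a, b> · <w, v>
flattened = Ti.ravel() @ Tj.ravel()          # flattened dot product

assert np.allclose(frobenius, factored)
assert np.allclose(frobenius, flattened)
```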
As with distributed trees, it is possible to introduce a different formulation to compute DST(t). Such a formulation has the advantage of being more computationally efficient, and it also makes clear that the process is compositional in nature, because it composes the distributional and distributed vectors of each node. More specifically, it can be shown that:
\[\mathrm{DST}(t) = \sum_{n \in N(t)}\mathrm{SN}^{\ast} (n)\]
where SN∗ is defined as:
\[\mathrm{SN}^{\ast} (n) = \begin{cases} \mathbf{0} & \text{if } n \text{ is terminal} \\ \mathrm{SN} (n) \otimes \mathbf{w}_{n} & \text{otherwise}\end{cases}\]
and SN(n) is the same as in Equation (3).
It is possible to show that the overall compositional distributional model DST(t) can be obtained with a recursive algorithm that exploits the vectors of the nodes of the tree.
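A hedged sketch of this recursive computation, building on the sn() function and node structure of the earlier distributed-tree sketch, and additionally assuming that each node also carries the distributional vector of its head (head_vector):

```python
import numpy as np

# Assumes nodes as in the distributed-tree sketch (a random .vector and
# .children), extended with .head_vector, the distributional vector w_n.
def sn_star(node):
    """SN*(n): zero for terminal nodes, SN(n) ⊗ w_n otherwise."""
    if not node.children:
        return 0.0
    return np.outer(sn(node), node.head_vector)

def dst(nodes):
    """DST(t) = sum of SN*(n) over the nodes n of t; a d × k matrix."""
    return sum(sn_star(n) for n in nodes)
```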
We actually propose two slightly different versions of our DSTs, according to how we produce the distributional vectors for words: a plain version, DST0, where we use the distributional vectors wn as they are, and a slightly modified version, DST+1, where we use as distributional vectors wn = (1 wn), that is, the original vectors augmented with a constant component of 1.
The two CDSMs we propose approximate two specific tree kernels belonging to the smoothed tree kernel class. These kernels recursively compute (although the recursive formulation is not given here) the following general equation:
\[STK(t^{a}, t^{b}) = \sum_{\substack{t_{i}\in S(t^{a})\\ t_{j}\in S(t^{b})}}\omega (t_{i}, t_{j})\]
where ω(ti, tj) is the similarity weight between two subtrees ti and tj. DSTK0 and DSTK+1 approximate the kernels STK0 and STK+1, respectively, which are defined by the following weights:
\[\omega _{0}(t_{i}, t_{j})= \left \langle \mathbf{w}_{\mathrm{h}(t_{i})}, \mathbf{w}_{\mathrm{h}(t_{j})} \right \rangle \cdot \delta (\mathrm{s}({t_{i}}), \mathrm{s}({t_{j}})) \cdot \sqrt{\lambda ^{\left | N(t_{i}) \right | + \left | N(t_{j}) \right |}}\]

\[\omega _{+1}(t_{i}, t_{j})= \left ( \left \langle \mathbf{w}_{\mathrm{h}(t_{i})}, \mathbf{w}_{\mathrm{h}(t_{j})} \right \rangle + 1 \right ) \cdot \delta (\mathrm{s}({t_{i}}), \mathrm{s}({t_{j}})) \cdot \sqrt{\lambda ^{\left | N(t_{i}) \right | + \left | N(t_{j}) \right |}}\]
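For concreteness, here is a direct transcription of the two weights as a sketch (function and argument names are illustrative; syntax_i stands for a canonical encoding of s(t_i)):

```python
import numpy as np

LAMBDA = 0.4

def omega_0(w_head_i, w_head_j, syntax_i, syntax_j, n_i, n_j):
    # n_i, n_j: subtree sizes |N(t_i)|, |N(t_j)|
    delta = 1.0 if syntax_i == syntax_j else 0.0
    return np.dot(w_head_i, w_head_j) * delta * np.sqrt(LAMBDA ** (n_i + n_j))

def omega_plus1(w_head_i, w_head_j, syntax_i, syntax_j, n_i, n_j):
    delta = 1.0 if syntax_i == syntax_j else 0.0
    return (np.dot(w_head_i, w_head_j) + 1.0) * delta * np.sqrt(LAMBDA ** (n_i + n_j))
```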
Generic settings. We experimented with two datasets: the Recognizing Textual Entailment datasets (RTE) (Dagan, Glickman, and Magnini 2006) and the Semantic Textual Similarity 2013 datasets (STS) (Agirre et al. 2013). The STS task consists of determining the degree of similarity (ranging from 0 to 5) between two sentences. We used the data for the core task of the 2013 challenge. The STS data comprise five datasets: headlines, OnWN, FNWN, SMT and MSRpar, which contain respectively 750, 561, 189, 750 and 1500 pairs. The first four datasets were used for testing, while all the training was done on the fifth. RTE is instead the task of deciding whether a long text T entails a shorter text, typically a single sentence, called the hypothesis H. It has often been seen as a classification task (see (Dagan et al. 2013)). We used four datasets: RTE1, RTE2, RTE3, and RTE5, with the standard split between training and testing. The dev/test distribution for RTE1-3 and RTE5 is respectively 567/800, 800/800, 800/800, and 600/600 T-H pairs.
Distributional vectors are derived with DISSECT (Dinu, The Pham, and Baroni 2013) from a corpus obtained by concatenating ukWaC (wacky.sslmit.unibo.it), a mid-2009 dump of the English Wikipedia (en.wikipedia.org) and the British National Corpus (www.natcorp.ox.ac.uk), for a total of about 2.8 billion words. We collected a 35K-by-35K matrix by counting the co-occurrences of the 30K most frequent content lemmas in the corpus (nouns, adjectives and verbs) and all the content lemmas occurring in the datasets within a 3-word window. The raw count vectors were transformed into positive Pointwise Mutual Information scores and reduced to 300 dimensions by Singular Value Decomposition. This setup was picked without tuning, as we found it effective in previous, unrelated experiments.
To build our DSTKs, and for the two baseline kernels TK and DTK, we used the implementation of the distributed tree kernels2. We used 1024 and 2048 as the dimensions of the distributed vectors; the weight λ was set to 0.4, as it is a value generally considered optimal for many applications (see also (Zanzotto and Dell’Arciprete 2012a)).
The statistical significance, where reported, is computed according to the sign test.
Direct correlation settings. For the direct correlation experiments, we used the RTE datasets and the testing sets of the STS dataset (that is, headlines, OnWN, FNWN, SMT). We computed Spearman’s correlation between the values produced by our DSTK0 and DSTK+1 and those produced by the standard versions of the smoothed tree kernels, that is, STK0 and STK+1 respectively. We obtained text fragment pairs by randomly sampling two text fragments in the selected set. For each set, we produced exactly the number of examples in the set, e.g., 567 pairs for RTE1 dev, and so on.
Table 1. Spearman’s correlation between Distributed Smoothed Tree Kernels and Smoothed Tree Kernels

|                 | dim  | RTE1 | RTE2 | RTE3 | RTE5 | headl | FNWN | OnWN | SMT  |
|-----------------|------|------|------|------|------|-------|------|------|------|
| STK0 vs DSTK0   | 1024 | 0.86 | 0.84 | 0.90 | 0.84 | 0.87  | 0.65 | 0.95 | 0.77 |
|                 | 2048 | 0.87 | 0.84 | 0.91 | 0.84 | 0.90  | 0.65 | 0.96 | 0.77 |
| STK+1 vs DSTK+1 | 1024 | 0.81 | 0.77 | 0.83 | 0.72 | 0.88  | 0.53 | 0.93 | 0.66 |
|                 | 2048 | 0.82 | 0.78 | 0.84 | 0.74 | 0.91  | 0.56 | 0.94 | 0.67 |
- 3 Correlations are obtained with the organizers’ script
Task-based settings. For the task-based experiments, we compared systems using the standard evaluation measure and the standard split of the respective challenges. As usual in RTE challenges, the measure used is accuracy, as the testing sets have the same number of entailment and non-entailment pairs. For STS, we used MSRpar for training and the four test sets for testing. We compared systems using Pearson’s correlation, the standard evaluation measure for the challenge3. Thus, results can be compared with the results of the challenge.
As classifier and regression learner, we used the Java version of LIBSVM (Chang and Lin 2011). The two tasks use our DSTs (and the related STKs) within the learners in different ways. In the following, we refer to instances in RTE or STS as pairs p = (ta, tb), where ta and tb are the parse trees of the two sentences a and b in STS, and of the text a and the hypothesis b in RTE.
We will indicate with K(p1, p2) the final kernel used in the learning algorithm, which takes as input two training instances, while we will use κ to denote either one of our DSTKs (that is, κ(x,y) = ⟨DST(x), DST(y)⟩) or one of the standard smoothed tree kernels (that is, κ(x,y) = STK(x,y)).
In STS, we encoded only a similarity feature between the two sentences. Thus, we used the kernel defined as:
\[K (p_{1},p_{2}) = (\kappa (t_{1}^{a}, t_{1}^{b})\cdot \kappa (t_{2}^{a}, t_{2}^{b}) + 1 )^{2}\]
In RTE, we followed standard approaches (Dagan et al. 2013; Zanzotto, Pennacchiotti, and Moschitti 2009), that is, we exploited a model with only a rewrite rule feature space (RR). The model uses our DSTs and the standard STKs as kernel functions in the following way:
\[RR (p_{1},p_{2}) = \kappa (t_{1}^{a}, t_{2}^{a})+ \kappa (t_{1}^{b}, t_{2}^{b})\]
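The two combinations can be summarized in a short sketch (illustrative code, with κ passed in as any fragment-level kernel, e.g. a DSTK computed as a Frobenius product of DST matrices):

```python
# Hedged sketch of the instance-level kernels; p = (t_a, t_b) is a pair of
# parse trees and kappa is any fragment-level kernel (a DSTK or an STK).
def sts_kernel(kappa, p1, p2):
    """K(p1, p2) = (kappa(t1_a, t1_b) * kappa(t2_a, t2_b) + 1)^2."""
    (t1a, t1b), (t2a, t2b) = p1, p2
    return (kappa(t1a, t1b) * kappa(t2a, t2b) + 1.0) ** 2

def rte_kernel(kappa, p1, p2):
    """RR(p1, p2) = kappa(t1_a, t2_a) + kappa(t1_b, t2_b)."""
    (t1a, t1b), (t2a, t2b) = p1, p2
    return kappa(t1a, t2a) + kappa(t1b, t2b)
```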
Finally, to investigate whether our DSTKs behave better than purely structural models, we experimented with the classical tree kernel (TK) (Collins and Duffy 2002) and the distributed tree kernel (DTK) (Zanzotto and Dell’Arciprete 2012a). Again, these kernels are used in the above models as κ(ta,tb).
Table 2. Task-based analysis: Correlation on Semantic Textual Similarity (STS)

|        | headl | FNWN  | OnWN  | SMT   | Average |
|--------|-------|-------|-------|-------|---------|
| DTK    | 0.448 | 0.118 | 0.162 | 0.301 | 0.257   |
| TK     | 0.456 | 0.145 | 0.158 | 0.303 | 0.265*  |
| DSTK0  | 0.491 | 0.155 | 0.358 | 0.305 | 0.327†  |
| STK0   | 0.490 | 0.159 | 0.349 | 0.305 | 0.325*  |
| DSTK+1 | 0.475 | 0.138 | 0.266 | 0.304 | 0.295   |
| STK+1  | 0.478 | 0.156 | 0.259 | 0.305 | 0.299*  |
† indicates a difference from DTK, TK, DSTK+1, and STK+1 with a statistical significance of p > 0.1; * indicates that the difference between the kernel and its distributed version is not statistically significant.
Table 1 reports the results of the correlation experiments. We report the Spearman’s correlations over the different sets (and different dimensions of distributed vectors) between our DSTK0 and STK0 (first two rows) and between our DSTK+1 and the corresponding STK+1 (second two rows). The correlation is above 0.80 on average for both the RTE and STS datasets in the case of DSTK0 and STK0. The correlation between DSTK+1 and the corresponding STK+1 is instead a little lower. This is because DSTK+1 approximates the sum of two kernels, TK and STK0 (as STK+1 is the sum of the two kernels). The underlying feature space is therefore bigger than that of STK0 and, thus, harder to approximate. The approximation also depends on the size of the distributed vectors. Higher dimensions yield better approximations: if we increase the distributed vector dimension from 1024 to 2048, the correlation between DSTK+1 and STK+1 increases up to 0.80 on RTE and up to 0.77 on STS. This direct analysis of the correlations shows that our CDSMs approximate the corresponding kernel functions and that there is room for improvement by increasing the size of the distributed vectors.
Task-based experiments confirm the above trend. Table 2 and Table 3, respectively, report the correlations of the different systems on STS and the accuracies of the different systems on RTE. Our CDSMs are compared against a baseline system (DTK), in order to understand whether our more complex model is worthwhile in the specific tasks, and against the systems with the corresponding smoothed tree kernels, in order to explore whether our DSTKs approximate systems based on STKs. For this set of experiments we fixed the dimension of the distributed vectors to 1024.
Table 2 is organized as follows: columns 2-6 report the correlations of the STS systems based on syntactic/semantic similarity. Comparing rows in these columns, we can see that DSTK0 and DSTK+1 behave significantly better than DTK and that DSTK0 behaves better than the standard TK. Thus, our DSTKs positively exploit distributional semantic information along with structural information. Moreover, both DSTK0 and DSTK+1 behave similarly to the corresponding models with the standard STKs. Results on this task confirm that structural and semantic information are both captured by CDSMs based on DSTs.
Table 3 is organized as follows: columns 2-6 report the accuracies of the RTE systems based on rewrite rules (RR).
Table 3. Task-based analysis: Accuracy on Recognizing Textual Entailment (RTE)

|        | RTE1  | RTE2  | RTE3  | RTE5  | Average |
|--------|-------|-------|-------|-------|---------|
| DTK    | 0.533 | 0.515 | 0.516 | 0.530 | 0.523   |
| TK     | 0.561 | 0.552 | 0.531 | 0.54  | 0.546   |
| DSTK0  | 0.571 | 0.551 | 0.547 | 0.531 | 0.550†  |
| STK0   | 0.586 | 0.563 | 0.538 | 0.545 | 0.558*  |
| DSTK+1 | 0.588 | 0.562 | 0.555 | 0.541 | 0.561†  |
| STK+1  | 0.586 | 0.562 | 0.542 | 0.546 | 0.559*  |
† indicates a difference from DTK and TK with a statistical significance of p > 0.1; * indicates that the difference between the kernel and its distributed counterpart is not statistically significant.
Results on RTE are extremely promising, as all the models including structural information and distributional semantics obtain better results than the baseline models, with a statistical significance of 93.7%. As expected (Mehdad, Moschitti, and Zanzotto 2010), STKs also behave better than tree kernels exploiting only syntactic information. More importantly, our CDSMs based on DSTs behave similarly to these smoothed tree kernels, in contrast to what was reported in (Zanzotto and Dell’Arciprete 2011). In (Polajnar, Rimell, and Kiela 2013), the results of the (Zanzotto and Dell’Arciprete 2011) method appear comparable to those of STKs for STS, but this is mainly due to the flattening of performance caused by the lexical token similarity feature, which is extremely relevant in STS. Even if distributed tree kernels do not approximate tree kernels well with distributed vectors of dimension 1024, our smoothed versions of the distributed tree kernels correctly approximate the corresponding smoothed tree kernels: their small difference is not statistically significant (less than 70%). The fact that our DSTKs behave significantly better than baseline models in RTE while approximating the corresponding STKs shows that it is possible to positively exploit structural information in CDSMs.
Distributed Smoothed Trees (DSTs) are a novel class of Compositional Distributional Semantics Models (CDSMs) that, as the experiments show, effectively encode structural information and distributional semantics in tractable rank-2 tensors. The paper shows that DSTs contribute to closing the gap between two apparently different approaches: CDSMs and convolution kernels. This contributes to starting a discussion toward a deeper understanding of how existing CDSMs represent structural information.