The translation of speech segments into their textual content in a different language is referred to in the literature as the task of speech-to-text translation (ST). ST involves two logical sub-tasks: automatic speech recognition (ASR), i.e. the modality conversion from the source audio into text, and machine translation (MT), i.e. the translation of the transcribed text into the target language. As a natural consequence of this logical division, the first ST architectures were based on a pipeline (or cascade) approach that combined an ASR and an MT model, where the output of the ASR system constituted the input of the MT system (Stentiford and Steer 1988; Waibel et al. 1991).
The recent rise of deep neural networks (LeCun, Bengio, and Hinton 2015) not only revolutionised the ASR and MT fields but also suggested a direct (or end-to-end) approach to ST, in which a single deep network performs the whole task at once, handling both the modality conversion and the language translation (Bérard et al. 2016; Weiss et al. 2017). This paradigm has been proposed to overcome the limitations of the cascade solution, namely: i) the impact of ASR errors on the MT system's ability to understand the content – as MT has no cues to recover from them – leading to error propagation (Ruiz and Federico 2014), ii) the information loss (e.g. prosody) caused by the mediated access (via the transcript) to the input audio, and iii) the higher latency introduced by the sequential inference of the two models with respect to a single-model architecture.
Despite the above-mentioned advantages, direct models have not yet replaced cascade solutions in industrial/real-world applications. The main reason lies in the lower quality of the generated translations, a performance gap that has been significantly reduced (if not closed) only recently, with direct models reaching quality comparable to (and sometimes better than) that of state-of-the-art cascade solutions and producing outputs that are indistinguishable to the end user (Bentivogli et al. 2021). The initial gap was mainly caused by the scarcity of parallel audio-translation corpora for ST, while plenty of ASR and MT parallel data are available. Research has overcome this shortage by means of data augmentation techniques (Jia et al. 2019; Bahar et al. 2019; Nguyen et al. 2020) and by transferring to the ST models the knowledge acquired by ASR/MT models trained on the two data-rich sub-tasks (Weiss et al. 2017; Anastasopoulos and Chiang 2018; Bérard et al. 2018; Bansal et al. 2019; Liu et al. 2019; Bahar, Bieschke, and Ney 2019). Along this latter direction, while the weights of an ASR model are commonly used to initialize the ST encoder (Bahar, Bieschke, and Ney 2019), the initialization of the decoder with the weights of an MT model has not consistently proved to be beneficial (Bahar, Bieschke, and Ney 2019; Gaido, Di Gangi, et al. 2020; Inaguma et al. 2020). As an alternative to decoder initialization, another method to transfer knowledge from MT has been proposed and successfully exploited (Liu et al. 2019): knowledge distillation (KD).
Following this promising line of research, in this paper we extend in several ways the analysis by (Gaido, Gangi, et al. 2020) on KD for direct ST models. Specifically, we complement the analysis of the specific knowledge learned by the ST model, as well as of the drawbacks introduced by KD in rich data conditions, by verifying that the previous findings generalise to different language pairs. We show on English\(\rightarrow\){French, German, Italian} that distilling knowledge at word level yields the highest performance but often prevents the ST model from producing the correct gender of the speaker and from translating long utterances made of more than one sentence. Within this richer evaluation setting, we also confirm that a further fine-tuning without KD solves these issues while retaining the quality improvements brought by KD. Finally, we analyse the correlation between the quality of ST student models and that of their MT teachers. Our experiments show that, regardless of the KD method adopted, a better MT teacher always leads to a better ST student, although the gains become smaller at higher MT quality scores.
KD has been introduced to transfer knowledge from a big model into a small, compressed one (Hinton, Vinyals, and Dean 2015). The goal is to have a small model – named student in the KD learning procedure – that performs similarly to its big counterpart – named teacher – while being usable on low-resource devices (e.g. mobile phones). Specifically, the student is trained to mimic the probability distribution of the teacher when processing the same input. This is obtained by using the probabilities generated by the teacher as the reference when training the student, instead of the usual reference distribution, in which the correct label is assigned probability 1 and all the others 0. In practice, this means that the student is not trained to optimize the cross entropy loss function, but to minimize the distance between the probability distribution it generates and the one generated by the teacher. The distance between the two probability distributions is computed with the KL-divergence (Kullback and Leibler 1951), which is formally defined as:
\[\tag{1}KL(P||Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)} \label{eq:kl_div}\]
which measures the closeness of Q to P, i.e. how much information is lost when using Q to approximate P. In our case, defining \(p(y|x)\) as the probability distribution over the target labels Y generated by the teacher for the input \(x\), and \(q(y|x)\) as the probability distribution generated by the student, Eq. 1 becomes:
\[\tag{2}\begin{aligned} KL(P||Q) &= \sum_{x \in X} \sum_{y \in Y} p(y|x) \log \frac{p(y|x)}{q(y|x)} \\ &= \sum_{x \in X} \sum_{y \in Y} p(y|x) \log p(y|x) - \sum_{x \in X} \sum_{y \in Y} p(y|x) \log q(y|x) \end{aligned} \label{eq:kl_specific}\]
Since the first term does not depend on the student output, the loss function that is optimized in knowledge distillation omits it and Eq. 2 becomes:
\[\tag{3}L(X) = - \sum_{x \in X} \sum_{y \in Y} p(y|x) \log q(y|x) \label{eq:kl_loss}\]
Notice that if we replace the teacher output distribution with the real target distribution, which is 1 for the correct target label \(y'\) and 0 for all the other target labels, we obtain the standard cross entropy loss:
\[\tag{4}L(X) = - \sum_{x \in X} \log q(y'|x) \label{eq:cross_entropy}\]
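For illustration, the two losses in Eq. 3 and Eq. 4 can be sketched in PyTorch as follows (a minimal sketch with illustrative tensor names, not the implementation used in our experiments):

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Eq. 3: cross entropy between the teacher and the student distributions.

    Both tensors have shape (batch, num_labels). The constant term of the
    KL-divergence (the teacher entropy) is dropped, as in Eq. 3.
    """
    teacher_probs = F.softmax(teacher_logits, dim=-1)          # p(y|x)
    student_log_probs = F.log_softmax(student_logits, dim=-1)  # log q(y|x)
    return -(teacher_probs * student_log_probs).sum()

def ce_loss(student_logits: torch.Tensor, target_labels: torch.Tensor) -> torch.Tensor:
    """Eq. 4: standard cross entropy, i.e. KD with a one-hot teacher distribution."""
    return F.cross_entropy(student_logits, target_labels, reduction="sum")
```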
- 1 Notice that this is true only for T > 1. If T < 1, the distribution is sharpened.
One of the reasons behind the success of KD has been identified in the capability of the student model to learn from the dark knowledge (Hinton, Vinyals, and Dean 2015) of the teacher, defined as the information present in the teacher model that is not exposed by looking only at the final output, which considers only the most likely label. By learning to mimic the behavior of the teacher also for the less likely labels, the student is indirectly exposed to such internal knowledge. To increase the relevance and contribution of the dark knowledge, (Hinton, Vinyals, and Dean 2015) introduce the hyper-parameter temperature (T) that smooths1 the output distribution. With T, in particular, the softmax operation converting the logits \(z_i\) into the corresponding probabilities \(p_i\) becomes:
\[\tag{5}p_i = \frac{e^{z_i / T}}{\sum_{j} e^{z_j / T}} \label{eq:temperature}\]
When not stated otherwise, T is assumed to be set to 1, which corresponds to the standard softmax operation without any smoothing factor.
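A minimal PyTorch sketch of the temperature-scaled softmax in Eq. 5 (illustrative code, not part of our implementation):

```python
import torch

def softmax_with_temperature(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Eq. 5: softmax over the last dimension with temperature T.

    T > 1 smooths the distribution (more weight on unlikely labels, i.e.
    more "dark knowledge"); T < 1 sharpens it; T = 1 is the standard softmax.
    """
    return torch.softmax(logits / T, dim=-1)

# Example: a higher T flattens the distribution.
logits = torch.tensor([4.0, 1.0, 0.5])
print(softmax_with_temperature(logits, T=1.0))  # peaky
print(softmax_with_temperature(logits, T=4.0))  # smoother
```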
KD has been proposed in the context of classification, where one label has to be predicted for every input. However, ST is a sequence-to-sequence task, so the output is not a single label but a sequence of variable length. Therefore, KD cannot be applied in its original form. For sequence-to-sequence tasks, (Kim and Rush 2016) proposed three different distillation techniques: i) word-level KD, ii) sequence-level KD, and iii) sequence interpolation.
Word-level KD (henceforth Word-KD) is the method most similar to the original KD definition introduced in Section 2.1. In this case, the KL divergence between the teacher and student outputs is computed for every element (time-step) of the target sequence, and the final distance is the sum of the divergences over all the elements (time-steps) of the sequence. Hence, denoting with \(T_x\) the length of the target sequence associated with the input \(x\), Eq. 3 becomes:
\[\tag{6}L(X) = - \sum_{x \in X} \sum_{t = 1}^{T_x} \sum_{y \in Y} p(y_{t}|x, y_{0}, ..., y_{t-1}) \log q(y_{t}|x, y_{0}, ..., y_{t-1}) \label{eq:wordkd}\]
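As an illustration of Eq. 6, the following PyTorch sketch computes Word-KD over a batch of target sequences, assuming that teacher and student logits are already aligned on the same target prefixes (padding handling is simplified; this is not the implementation used in our experiments):

```python
import torch
import torch.nn.functional as F

def word_kd_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 padding_mask: torch.Tensor = None) -> torch.Tensor:
    """Word-KD (Eq. 6): sum over the target time-steps of the per-step
    teacher/student cross entropy.

    Both logits tensors have shape (batch, tgt_len, vocab_size).
    `padding_mask` is True on padded positions, which are excluded.
    """
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    loss_per_step = -(teacher_probs * student_log_probs).sum(dim=-1)  # (batch, tgt_len)
    if padding_mask is not None:
        loss_per_step = loss_per_step.masked_fill(padding_mask, 0.0)
    return loss_per_step.sum()
```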
Sequence-level KD (henceforth Seq-KD) consists in replacing the target reference (in our case the translation provided in the training corpora) with the sequence of tokens (in our case the automatic translation) generated by the teacher model. The loss function can be either the cross entropy or one of its variants, such as the label smoothed cross entropy.
Sequence interpolation (henceforth Seq-Inter) also relies on the predictions of the teacher model. In this case, though, the N most likely sequences resulting from beam search are re-scored and the one with the highest similarity with the ground truth is chosen as surrogate reference. In the case of textual outputs, as in MT and ST, the similarity with the ground truth is computed using the BLEU score (Papineni et al. 2002).
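The selection of the surrogate reference in Seq-Inter can be sketched as follows, assuming the teacher's N-best hypotheses are available as strings and using sacrebleu's sentence-level BLEU (function and variable names are ours):

```python
import sacrebleu

def select_surrogate_reference(nbest_hypotheses: list, ground_truth: str) -> str:
    """Seq-Inter: among the teacher's N-best hypotheses, pick the one with the
    highest sentence BLEU against the ground-truth reference. The selected
    hypothesis then replaces the reference when training the student."""
    return max(
        nbest_hypotheses,
        key=lambda hyp: sacrebleu.sentence_bleu(hyp, [ground_truth]).score,
    )
```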
Finally, Word-KD can be combined with the other two methods, resulting in two additional possible methods: Word-KD+Seq-KD and Word-KD+Seq-Inter.
- 2 ST does not only involve translating from a source to a target language, but also recognising the s (...)
KD has been applied to direct ST for motivations different from the original goal of model compression. (Liu et al. 2019) train a direct ST model with an MT teacher to transfer knowledge from the easier MT task,2 in which models obtain better performance, and hence improve the quality of the resulting ST student model. (Gaido, Di Gangi, et al. 2020; Papi et al. 2021), instead, leverage KD from an MT model trained on a large amount of data to distill into the ST student information that the latter could not directly access because of the different input modality. All these works employ the Word-KD method. (Jia et al. 2019), instead, generate synthetic data by translating the transcripts of ASR corpora with an MT model. Although presented as a data augmentation method, this can also be interpreted as an application of the Seq-KD method, although in this case the benefits of KD cannot be isolated from those due to the additional data.
In this section, we describe the architectures and the parameters used in our experiments. To ease the reproducibility of our work and facilitate building on it, the code is open source and can be found at https://github.com/mgaido91/FBK-fairseq-ST.
Our MT models are plain Transformer (Vaswani et al. 2017) models with 6 Transformer encoder layers and 6 Transformer decoder layers. In the experiments discussed in Section 4, we use a small model with 512 hidden features and 8 attention heads in all attention layers and 1,024 hidden features in the feed-forward networks (FFNs) of the Transformer layers. In the other experiments, as they involve a larger amount of data, all these hyper-parameters are doubled.
For ST, we use an architecture based on Transformer with some adaptations for the input modality (audio), which differs from the one (text) for which Transformer was introduced (Dong, Xu, and Xu 2018; Di Gangi et al. 2019). One of the key challenges of using Transformer for speech is the greater length of the input sequence (usually ~10 times longer than the corresponding textual representation), as Transformer's memory requirements grow quadratically with the input sequence length. Thus, to avoid out-of-memory issues and enable training on speech sources, the input features are processed by two 2D convolutions, each having stride 2, that reduce the sequence length by a factor of four (Bérard et al. 2018; Di Gangi et al. 2019). The resulting sequence is then fed to the Transformer encoder, whose self-attention layers are modified by biasing the attention matrix toward close elements with a logarithmic distance penalty (Di Gangi et al. 2019).
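The following sketch illustrates the convolutional front-end and a possible formulation of the logarithmic distance penalty (layer sizes and the exact penalty formula are illustrative assumptions, not the released implementation):

```python
import torch
import torch.nn as nn

class ConvFrontEnd(nn.Module):
    """Two 2D convolutions with stride 2: the time dimension (and the feature
    dimension) is reduced by a factor of 4 before the Transformer encoder,
    mitigating the quadratic cost of self-attention."""

    def __init__(self, n_mel: int = 40, channels: int = 64, d_model: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(1, channels, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        # After two stride-2 convolutions, the 40 features become 10 per channel.
        self.proj = nn.Linear(channels * (n_mel // 4), d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, n_mel)
        x = x.unsqueeze(1)                  # (batch, 1, time, n_mel)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))       # (batch, C, time/4, n_mel/4)
        b, c, t, f = x.shape
        return self.proj(x.permute(0, 2, 1, 3).reshape(b, t, c * f))

def log_distance_penalty(seq_len: int) -> torch.Tensor:
    """Bias added to the self-attention scores: positions far apart receive a
    logarithmic penalty, biasing attention toward close elements."""
    pos = torch.arange(seq_len)
    dist = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs().float()
    return -torch.log1p(dist)               # (seq_len, seq_len), 0 on the diagonal
```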
In the experiments of Section 4, we use a small model, with 256 hidden features and 4 attention heads in all attention layers and 1,024 hidden features in the FFNs of Transformer layers. The number of Transformer encoder layers is 8 and the number of Transformer decoder layers is 6.
In all other experiments, as we train on larger corpora, following (Gaido, Di Gangi, et al. 2020) we use 11 Transformer encoder layers and 4 Transformer decoder layers for our ST models, while the ASR models used for the pre-training have 8 Transformer encoder layers and 6 Transformer decoder layers. When loading the pre-trained encoder layers, the additional 3 layers of the ST model are randomly initialized and behave as adapter layers (Jia et al. 2019; Bahar, Bieschke, and Ney 2019). Moreover, we increase the size of the models, which have 512 hidden features and 8 attention heads in the attention layers and 2,048 hidden features in the FFNs.
In all our trainings we use Adam (Kingma and Ba 2015) with betas (0.9, 0.98) as optimizer and, when the loss is not the KL-divergence, we use label smoothing (Szegedy et al. 2016) with smoothing factor 0.1. For ASR, the objective function also includes a Connectionist Temporal Classification (CTC) loss (Graves et al. 2006), which is summed to the cross entropy. The CTC is computed on the encoder output (with the transcripts as target), and its only role is to aid model convergence and improve the final quality of the model (Kim, Hori, and Watanabe 2017). In all trainings, the learning rate is increased linearly for 4,000 updates, up to the value of \(5 \cdot 10^{-3}\), and then decays with the inverse square root policy. In the fine-tunings, instead, the learning rate is kept fixed at \(1 \cdot 10^{-4}\). The dropout is set to 0.2.
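The learning-rate policy can be sketched as follows (an illustrative function, not the scheduler implementation we use):

```python
def learning_rate(step: int, warmup_updates: int = 4000, peak_lr: float = 5e-3) -> float:
    """Linear warm-up for `warmup_updates` steps up to `peak_lr`, then inverse
    square root decay (fine-tunings instead use a fixed learning rate of 1e-4)."""
    if step < warmup_updates:
        return peak_lr * step / warmup_updates
    return peak_lr * (warmup_updates ** 0.5) * (step ** -0.5)
```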
- 3 The value of the update frequency hyper-parameter depends on the number of GPUs used in the trainin (...)
Each mini-batch contains 8 samples but updates are delayed to reach an overall batch size of 512.3 All our models are trained on K80 GPUs with 12 GB of RAM.
The input audio is pre-processed by extracting 40 features with Mel filter banks, using overlapping windows of 25 ms and a 10 ms step size. The extracted features are then normalized per speaker. This pre-processing is performed with XNMT (Neubig et al. 2018). Samples resulting in more than 2,000 vectors (i.e. longer than 20 s) are discarded to avoid excessive memory requirements at training time. Text, instead, is tokenized after punctuation normalization with Moses (Koehn et al. 2007) and segmented into sub-word units using 8,000 BPE (Sennrich, Haddow, and Birch 2016) merge rules, as suggested in (Di Gangi et al. 2020). The BPE merge rules are jointly learned on the two languages of the MT dataset.
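A possible sketch of this feature extraction, using torchaudio's Kaldi-compliant filter bank instead of XNMT (parameter names follow torchaudio and are our assumption; normalization is shown per utterance for simplicity):

```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

def extract_features(wav_path: str):
    """40 Mel filter bank features with 25 ms windows and 10 ms step size."""
    waveform, sample_rate = torchaudio.load(wav_path)
    features = kaldi.fbank(
        waveform,
        num_mel_bins=40,
        frame_length=25.0,   # ms
        frame_shift=10.0,    # ms
        sample_frequency=sample_rate,
    )
    # In practice mean/std are computed over all the utterances of a speaker
    # (per-speaker normalization); here we normalize per utterance.
    return (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-8)
```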
Following (Gaido, Gangi, et al. 2020), we compared the three KD methods in a controlled setting, using only the data from Librispeech (Kocabiyikoglu, Besacier, and Kraif 2018), which contains 132,553 (audio, transcript, translation) triplets for the English\(\rightarrow\)French language direction. The MT teacher model is trained on the (transcript, translation) pairs, the ASR model used to initialize the ST encoder on the (audio, transcript) pairs, and the ST model on the (audio, translation) pairs. Within this controlled setting, the benefits brought by KD to the ST students are not due to an indirect exposure to additional MT data, but to the easier learning enabled by extracting knowledge from the better-performing MT teacher.
The definition of the Word-KD method given in Section 2.2 implies that the whole output distribution of the teacher model is compared with the whole output distribution of the student. In practice, this is highly inefficient: pre-computing and storing the output probabilities for each token of each sequence requires huge storage capacity (e.g. with ~100,000 samples of average length 100 and 8,000 labels in the output distribution, we would need to store 80,000,000,000 floats, corresponding to more than 320 GB of storage). On the other hand, re-computing the teacher distribution at every iteration entails a forward pass on the teacher network for every input batch, leading to a significant increase in training time.
Considering that the softmax operation produces peaky outputs that tend to concentrate most of the probability mass on 3-4 tokens, we hypothesize that truncating the output distribution and restricting the loss computation to the K most likely labels can speed up training without compromising the quality of the resulting model.
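The truncation can be sketched as follows (illustrative code: only the K most likely teacher labels are kept, re-normalized, and used in the Word-KD loss):

```python
import torch

def truncate_teacher_distribution(teacher_probs: torch.Tensor, k: int = 8):
    """Keep the top-K teacher probabilities (per target position) and
    re-normalize them so that they sum to 1.

    teacher_probs: (batch, tgt_len, vocab_size). Returns the top-K
    probabilities and the corresponding label indices, which is all that
    needs to be stored or compared against the student.
    """
    topk_probs, topk_indices = teacher_probs.topk(k, dim=-1)
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    return topk_probs, topk_indices

def truncated_word_kd_loss(student_log_probs: torch.Tensor,
                           topk_probs: torch.Tensor,
                           topk_indices: torch.Tensor) -> torch.Tensor:
    """Word-KD restricted to the teacher's top-K labels."""
    student_topk = student_log_probs.gather(-1, topk_indices)  # (batch, tgt_len, k)
    return -(topk_probs * student_topk).sum()
```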
Table 1 reports the results for different K values. As the output is required to be a valid probability distribution, after the truncation the probabilities are re-scaled to sum up to 1. Consistently with our hypothesis on the softmax behavior, limiting the KL-divergence computation to a small number of labels does not impact performance. On the contrary, the best result is obtained with 8 labels. Indeed, predictions with very low probabilities are likely to be uninformative and noisy and do not carry useful information about the internal knowledge of the teacher. In light of these results, hereinafter all experiments with Word-KD compute the KL-divergence setting K=8, i.e. on the top 8 output labels of the teacher distribution.
Table 1: Results (BLEU score) with different K values, where K is the number of tokens considered for Word-KD.
| Top K | BLEU |
| --- | --- |
| 4 | 16.43 |
| 8 | 16.50 |
| 64 | 16.37 |
| 1024 | 16.34 |
As mentioned in Section 2.1, KD has been proposed with a hyper-parameter, the temperature, that controls the smoothness of the output distribution and increases/decreases the importance of the so-called dark knowledge. In our work, we tested multiple values aimed at smoothing the probability distributions and favoring the learning of such dark knowledge. According to the results shown in Table 2, the best BLEU score is achieved by setting the temperature to 1.0, i.e. by training without any smoothing factor. This finding suggests that ST models – as they need to learn a more complex task – have a limited capacity with respect to MT models, and therefore focusing only on the mode of the MT model distributions is more convenient. Accordingly, in the following experiments we do not apply smoothing, setting the temperature hyper-parameter to 1.0.
Table 2: Results with different temperatures (T). All differences are statistically significant with p=0.05.
| T | BLEU |
| --- | --- |
| 1.0 | 16.50 |
| 4.0 | 16.11 |
| 8.0 | 14.27 |
We now compare the standard cross entropy loss – which we consider our baseline – with the KD methods described in Section 2.2 and summarized in Figure 1. The comparison also covers different combinations of such techniques, which can be performed in two ways: by applying both techniques together in the same training, or by first training with one technique and then fine-tuning the resulting ST model with the other. We also experimented with a fine-tuning (FT) without KD after the application of a KD method. The results are reported in Table 3.
Figure 1: Illustration of the KD methods.
Table 3: Results of the small model on Librispeech with different KD methods and combining them in a single training or in consecutive trainings through a fine-tuning (FT). The “\(\dagger\)” symbol indicates that improvements over Word-KD are statistically significant with p=0.05.
| Method | BLEU |
| --- | --- |
| Baseline | 9.4 |
| Word-KD | 16.5 |
| Seq-KD | 13.4 |
| Seq-Inter | 13.3 |
| Seq-KD + Word-KD | 15.7 |
| Word-KD + FT Seq-KD | 16.7\(\dagger\) |
| Seq-KD + FT Word-KD | 16.8\(\dagger\) |
| Word-KD + FT w/o KD | 16.8\(\dagger\) |
Looking at the Baseline and the three KD techniques, we can conclude that all KD methods improve significantly over the Baseline, with gains ranging from 3.9 to 7.1 BLEU points. Moreover, Word-KD is the clear winner among them, with a 3.1 BLEU margin over Seq-KD. Combining Word-KD and Seq-KD in a single training (Seq-KD + Word-KD) does not bring advantages; on the contrary, the result is worse (-0.8 BLEU) than training with Word-KD alone. The quality of the resulting model is instead improved when Word-KD and Seq-KD are applied sequentially, i.e. when a first training with either of them is followed by a fine-tuning with the other (see Word-KD + FT Seq-KD and Seq-KD + FT Word-KD). Both solutions yield small gains of 0.2-0.3 BLEU points over the Word-KD method alone. The same result is also obtained when training with Word-KD and fine-tuning on the ground-truth references with label smoothed cross entropy, i.e. without KD (Word-KD + FT w/o KD).
Although in line with previous work on KD from MT for ST (Liu et al. 2019), our results do not confirm the trends shown in (Kim and Rush 2016), where KD is used to compress MT models. Indeed, in our case Word-KD is the clear winner. This suggests that the effectiveness of the different KD methods in a sequence-to-sequence scenario varies depending on the peculiarities of the task.
- 4 Although large ST corpora are not available, plenty of ASR and MT data can be collected to build mo (...)
Having defined the best KD practice, we validate its effects in a realistic, high-resource scenario, in which large parallel MT corpora are available, together with a considerable amount of speech hours with the corresponding transcripts (ASR data).4 In this case, we train our models on three language pairs: English\(\rightarrow\)French (en-fr), English\(\rightarrow\)German (en-de) and English\(\rightarrow\)Italian (en-it).
The MT data is a selection of the OPUS corpora (Tiedemann 2016), filtered using the cleaning utilities of ModernMT (Bertoldi et al. 2017). OPUS contains parallel sentences automatically extracted from the web. As such, their nature is very different from that of the ASR and ST data, which are based on recorded sessions (TED or European Parliament talks) or book/manual readings and whose utterances can contain more than one sentence. The ASR data include How2 (Sanabria et al. 2018), Librispeech (Kocabiyikoglu, Besacier, and Kraif 2018), Mozilla Common Voice,5 TED-LIUM 3 (Hernandez et al. 2018), and MuST-C (Cattoni et al. 2021), which also constitutes our ST corpus together with Europarl-ST (Iranzo-Sánchez et al. 2020). Both ASR and ST trainings augment the source audio with SpecAugment (Park et al. 2019), using 0.5 as probability, 13 as frequency masking parameter, 20 as time masking parameter, and 2 as the number of both frequency and time masks.
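A sketch of this SpecAugment configuration using torchaudio's masking transforms (the probability gate and the number of masks are applied manually; this mirrors, but does not reproduce, the implementation we use):

```python
import random
import torchaudio.transforms as T

freq_mask = T.FrequencyMasking(freq_mask_param=13)  # frequency masking parameter
time_mask = T.TimeMasking(time_mask_param=20)       # time masking parameter

def specaugment(features, p: float = 0.5, n_freq_masks: int = 2, n_time_masks: int = 2):
    """Apply SpecAugment with probability p to a spectrogram tensor of
    shape (..., freq, time)."""
    if random.random() > p:
        return features
    for _ in range(n_freq_masks):
        features = freq_mask(features)
    for _ in range(n_time_masks):
        features = time_mask(features)
    return features
```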
The ST training is carried out in three phases: i) a training with Word-KD on the ASR corpora, whose transcripts are translated into the target language with the MT model (i.e. a Word-KD + Seq-KD training on the ASR data); ii) a fine-tuning with Word-KD on the ST corpora; iii) a fine-tuning without KD, following the best training method found in Section 4.2. The ST encoder is initialized with that of an ASR model trained on the above-listed corpora, which scores 10.2 WER on the MuST-C test set.
Table 4 reports the scores of the MT teachers, of the ST students after the first two training steps (those including Word-KD), and the final ST scores after the last fine-tuning without KD. These results emphasize the importance of the last fine-tuning without KD to obtain state-of-the-art results. Indeed, we can see that in the real scenario, where there is a significant mismatch between the MT and the ST training data, distilling the MT knowledge brings information and benefits that mostly emerge in the overall scores only after the final fine-tuning. Our hypothesis is that the additional useful knowledge is counterbalanced by the negative effect of learning patterns that are valid only for the MT training data. In the following, we study what these spurious patterns and negative effects are.
Table 4: BLEU scores of the MT teachers and ST students on the MuST-C tst-COMMON set for English\(\rightarrow\){French, German, Italian}.

| Language Pair | MT Teacher | ST after Word-KD (step ii) | ST after fine-tuning (step iii) |
| --- | --- | --- | --- |
| en-de | 32.1 | 25.8 | 27.6 |
| en-fr | 46.0 | 36.5 | 40.3 |
| en-it | 32.7 | 22.8 | 27.7 |
- 6 The adoption of physical cues can lead to reductionist gender classifications (Zimman 2020) and be (...)
One possible explanation of the efficacy of the fine-tuning on the ST task after the Word-KD training is that the ST input (audio) contains information that is not present in the MT input (the corresponding transcript). As an example, the sentence “I am a student” can be translated into Italian either as “Sono uno studente” or as “Sono una studentessa” depending on the gender of the speaker. As this information is completely missing in the textual English input, an MT model is likely to produce the more frequent masculine form, with a representational harm for women (Savoldi et al. 2021), while in the audio the speaker's pitch can be used as a gender cue to disambiguate the correct form. Although in general biological features should not be considered as gender cues,6 our dataset (MuST-C) contains a strong correlation between speakers' vocal characteristics and gender forms in the reference translations, so in our setting ST models can learn and leverage this gender cue.
We validate this hypothesis by testing our models on Category 1 of MuST-SHE (Bentivogli et al. 2020), which contains (audio, translation) pairs in which gender-marked terms referring to the speaker are annotated to evaluate systems' ability to produce correct gender forms in the translation. As baselines, we report both the ST system developed by (Bentivogli et al. 2020) – where target text is represented at character level – and the BPE-based system by (Gaido et al. 2021): the latter work demonstrated that target-text segmentation is an important factor for systems' ability to translate gender, and our systems segment target text with BPE, as this segmentation method leads to the best translation quality. We measure the ability to translate gender with gender accuracy (Gaido, Savoldi, et al. 2020), i.e. the percentage of correct gender realizations among the words produced by the system and annotated in MuST-SHE.
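As an illustration, gender accuracy can be sketched as follows on MuST-SHE-style annotations (a simplified version: the official evaluation script handles tokenization and multi-word terms more carefully, and the field names below are our assumption):

```python
def gender_accuracy(system_outputs, annotations):
    """Percentage of annotated gender-marked terms that the system realizes in
    the correct gender form, computed over the terms it actually generates
    (in either the correct or the wrong gender form)."""
    correct, total = 0, 0
    for output, ann in zip(system_outputs, annotations):
        tokens = output.lower().split()
        for term in ann["terms"]:  # hypothetical annotation structure
            right = term["correct_form"].lower()
            wrong = term["wrong_form"].lower()
            if right in tokens:
                correct += 1
                total += 1
            elif wrong in tokens:
                total += 1  # generated, but in the wrong gender
    return 100.0 * correct / total if total else 0.0
```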
The results for the English\(\rightarrow\)French and English\(\rightarrow\)Italian language pairs are reported in Table 5. First, we can notice that fine-tuning on ST data indeed improves the gender accuracy of the feminine forms, from 20.9% to 33.6% on en-it and from 26.9% to 32.3% on en-fr, reducing the bias towards generating masculine forms. Second, the gap with a BPE-based ST system (Base BPE ST) is closed (en-it: 33.6% vs 33.2%) or becomes small (en-fr: 32.3% vs 37.2%), so the fine-tuning seems to solve the limitation of the ST student compared to a normal ST system. The gap with the char-based ST systems by (Bentivogli et al. 2020) is still large (33.6% vs 49.5% on en-it, 32.3% vs 46.3% on en-fr), but it is motivated by the different text segmentation (char vs BPE). The study of hybrid solutions that go beyond the trade-off between the translation quality of BPE and the gender accuracy of char-based segmentation is left to other works (Gaido et al. 2021) and future research.
Table 5: BLEU score and Gender Accuracy on Category 1F (female speakers) and 1M (male speakers) of the MuST-SHE test set.
| System | BLEU | Female Gender Acc. | Male Gender Acc. |
| --- | --- | --- | --- |
| en-it | | | |
| Base Char ST (Bentivogli et al. 2020) | 21.5 | 49.5% | 87.2% |
| Base BPE ST (Gaido et al. 2021) | 21.8 | 33.2% | 88.5% |
| MT | 33.6 | 16.3% | 88.5% |
| Seq-KD + Word-KD + FT Word-KD | 23.6 | 20.9% | 84.9% |
| + FT w/o KD | 27.5 | 33.6% | 80.5% |
| en-fr | | | |
| Base Char ST (Bentivogli et al. 2020) | 27.9 | 46.3% | 86.2% |
| Base BPE ST (Gaido et al. 2021) | 25.9 | 37.2% | 75.4% |
| MT | 39.6 | 16.2% | 89.6% |
| Seq-KD + Word-KD + FT Word-KD | 32.0 | 26.9% | 79.4% |
| + FT w/o KD | 34.3 | 32.3% | 79.6% |
All in all, these experiments show that the final fine-tuning mitigates the additional gender bias introduced by distilling knowledge from an MT teacher. However, the better gender translation alone does not explain the large fine-tuning gains. The next section therefore describes the other negative effects of KD, detected via a manual analysis.
We conducted a manual analysis on the en-it outputs, as en-it shows the highest gain (+4.9 BLEU, compared to +3.8 BLEU for en-fr and +1.8 BLEU for en-de – see Table 4). In particular, we selected and inspected the samples with the highest TER (Snover et al. 2006) gains after the fine-tuning. This analysis revealed two main types of output improvements.
Avoid Truncation. The ST student often generates only the first sentence of an utterance and terminates the generation after it, regardless of whether the utterance really contains a single sentence or more than one. In the latter case, the output hence turns out to be truncated. Most likely, the root cause can be attributed to the nature of the data the MT teacher is trained on: MT corpora contain mostly parallel sentences, and a sample rarely contains more than one sentence. As such, the MT teacher (and, in turn, its ST student) learns to terminate the generation after the full stop. Fine-tuning on the ST task, however, solves the issue: upon manual inspection, none of the outputs of the fine-tuned model exhibits truncations.
Verbal Tense and Lexical Choices. The ST student often chooses verbal tenses that are more common but less accurate. For instance, “That meant I was going to be on television” has been translated by the ST student as “Questo significava che stavo andando in tv”. Although it might be considered acceptable in a colloquial scenario, this translation is grammatically wrong. The fine-tuned model, instead, produces the correct translation with the grammatically-correct verbal tense “Questo significava che sarei andata in televisione”. Similarly, in some cases the ST student prefers common, generic words. For instance, “She has taken a course in a business school, and she has become a veterinary doctor” should be translated as “Ha seguito un corso in una scuola di business, ed è diventata una veterinaria”. However, the ST student produces lezione (lesson) instead of corso and economia (economics) instead of scuola di business. After fine-tuning, the model uses the correct terms corso and business school. Though important in terms of final score, these improvements may also be considered as an adaptation to a different domain and linguistic style (less colloquial), mostly due to the domain mismatch between the MT training data (web-crawled sentence pairs) and the ST data (TED talks).
As mentioned, the fine-tuning enhancements are mostly adaptations to the ST data and domain, whose peculiarities differentiate them from the MT corpus used to train the MT teacher. This also explains why the gains obtained with the fine-tuning are smaller in Section 4.2, where the MT and ST data coincide.
So far, we have analyzed what the ST student learns from the MT teacher. However, we have not yet addressed the questions: how much does the ST student learn from the MT teacher? How important is the quality of the MT teacher for the quality of the ST student? To answer these questions, we experimented with MT teachers of different quality (controlled by adding/removing training data) to train ST students on the MuST-C en-it section, the same used in our previous analysis. We tested both Word-KD and Seq-KD to understand whether the quality of the teacher is a factor to be considered when choosing the KD method, e.g. whether one method is preferable with low-performing teachers while the other is superior with strong teachers.
We consider four teachers, whose quality is controlled by sampling the training data. In particular, the best teacher (scoring 32.7 BLEU) is trained on the whole OPUS corpus (60M sentence pairs). Then, 10M, 1M and 250K sentences (the latter being the size of the MuST-C dataset) are sampled to define the training sets of the other three teachers, ensuring that all the sentences included in one training set are also present in the bigger datasets. The teachers trained on these smaller datasets score respectively 30.1, 26.1, and 20.3 BLEU. Unsurprisingly, the score of the MT system trained on the MuST-C dataset (28 BLEU) is significantly higher than that of the MT models trained on a similar amount of out-of-domain data, and the size of the training data has to be increased by 40 times to obtain better scores. Although the scores are relatively low, this represents a normal working condition when using KD as a source of potentially useful external knowledge, as MT models are usually trained on large generic corpora.
Looking at Figure 2, we can first confirm the intuition that a better teacher leads to a better student, although the students' training set is the same and the margin with respect to the teacher is large even with the worst teacher (+3.7 BLEU). Second, we can notice that the student is able to learn the additional knowledge of the teacher only partially: the gap between the MT teacher's and the ST student's quality increases with the teacher quality, and the ST student's BLEU score does not depend linearly on the teacher's BLEU, as the benefits become smaller at higher BLEU scores (the Word-KD student gains only 0.3 BLEU when the teacher improves from 30.1 to 32.7). We can conclude that the student is able to learn only part of the teacher's knowledge, and that its lower scores are not only due to the lower capacity of the student architecture, since the student has a large margin of improvement even with bad teachers but improves significantly with a better teacher.
Figure 2: ST student performance (y axis – BLEU score) according to the MT teacher quality (x axis – BLEU score), when using Word-KD and Seq-KD, on the MuST-C en-it dataset.
Finally, the comparison between Word-KD and Seq-KD shows similar trends and scores: the two methods behave similarly with both low- and high-quality teachers. Indeed, the very small BLEU differences can be ascribed to statistical fluctuations, and neither method is consistently better than the other. These results do not confirm the superiority of Word-KD shown in Section 4.2, but the difference can be explained by the different setting: in Section 4.2 the training set of the MT teacher is the same set on which the ST student is trained, while here the MT teacher is trained on different, out-of-domain corpora.
All in all, this analysis indicates that only part of the knowledge of the teacher can be learned by the student. Future research might try to explain which information can be learned by the student, either to provide insights on how to create models that are better teachers because they focus on what student models can actually learn, or to understand how to inject into the student the teacher knowledge that current KD methods do not manage to transfer.
In the wake of previous preliminary work showing promising results obtained by distilling knowledge from an MT model to improve direct ST models, in this study we conducted a more systematic and thorough analysis of the application of KD techniques to the training of an ST system. First, we compared the methods proposed in the literature to distill knowledge in sequence-to-sequence models, such as MT and ST systems. Our experiments show the superiority of the Word-KD technique and the importance of fine-tuning the ST student on the ST data without KD. Second, we studied the benefits and limitations introduced by distilling knowledge from an MT teacher. The different modality and the lower information richness of the teacher's textual input also lead to limitations and drawbacks – such as an increased gender bias in gender-marked words referring to the speaker, and sentence truncation and omission in multi-sentential utterances – that can be overcome with the simple above-mentioned fine-tuning without KD. Third, we demonstrated that the quality of the MT teacher is essential to obtain good ST systems and that a better MT teacher leads to a better ST student, although the student gains tend to saturate when the teacher scores are high. Overall, our results show that distilling knowledge from MT is a good knowledge transfer technique, which allows ST models to benefit from the abundance of parallel textual data. However, it requires some care, as shown by the importance of a KD-independent fine-tuning to remove the undesirable side-effect of learning behaviors of the MT teacher that can be harmful for the task at hand.
This work is part of the “End-to-end Spoken Language Translation in Rich Data Conditions” project,7 which has been financially supported by an Amazon AWS ML Grant.