
Context concreteness for the second constituent slows down compound-word processing

Chariton Charitonidis

Abstract

The present paper investigates the effects of valence, arousal, and concreteness norms produced in Warriner et al. [2013], Brysbaert et al. [2014], and Snefjella & Kuperman [2016] on English compounding. The objects of study are over 2000 non‑spaced (concatenated) compounds taken from the LADEC database (Gagné et al. [2019]). In the multiple regression models, the representation (word-level) and context norms are used as independent variables. The lexical decision and naming times from the English Lexicon Project (ELP) and the British Lexicon Project (BLP) are used as dependent variables. It is found that higher values of context concreteness for the second constituent are associated with slower response times across lexical decision and naming.



I would like to thank Christina Gagné and Victor Kuperman for giving me details on their tests and datasets. My special thanks to the anonymous referees of Lexis for their invaluable comments on an earlier version of this paper.

Introduction

  • 1 Warriner et al.’s [2013] dataset also included ‘dominance’ norms referring to the “degree of contro (...)
  • 2 In the literature and in the present paper, the terms ‘norms’ and ‘ratings’ are used interchangeabl (...)

1In recent years, there has been a considerable focus on the interface of lexical meaning and emotion (for a review of relevant studies, see Citron et al. [2016] and Yao et al. [2016]). Warriner et al. [2013] produced a dataset with norms for English words according to the affective variables ‘valence’ (positivity) and ‘arousal’ (excitement, mood-enhancement).1 Brysbaert et al. [2014] produced a dataset with norms for English words according to the sensorimotor variable ‘concreteness’. Henceforth, the word-level norms from these datasets are referred to with the generic term representation norms.2

2In the following, I give the definition of the above-mentioned variables (Kuperman [2013]). Valence, or emotional positivity, gages the amount of pleasantness or discomfort that a person feels when reading the word, and is measured on a scale from 1 (sad, unhappy) to 9 (happy). Words with extreme average valence ratings are pedophile (1.26) and vacation (8.53). Arousal assesses the level of excitement that raters associate with the read word, and is measured on a scale from 1 (calm) to 9 (excited). Words with extreme average arousal ratings are grain (1.6) and insanity (7.79)… Concreteness assesses, on a scale from 1 to 5, how easily the referent of the word can be seen, heard, felt, smelled, or tasted... Words with extreme average concreteness ratings are: essentialness (1.04) and flashlight (5.00). (Kuperman [2013: 3])

  • 3 In Snefjella & Kuperman [2016], the term ‘content words’ is equivalent to the term ‘non-stopwords’. (...)
  • 4 Also excluded were 493 words whose overall context values “were more than three standard deviations (...)

3In Snefjella & Kuperman [2016], the application of representation norms to the 7-billion-token USENET corpus (Shaoul & Westbury [2013]) resulted in valence, arousal, and concreteness norms for word contexts, henceforth referred to as context norms. Each context extended from five content words before to five content words after a target word.3 Contexts in which fewer than three words matched the representation norms were excluded.4 Accordingly, 14,853 words that had semantic estimates for both representations and contexts were considered. Table 1 gives a sample context for the word evidence. Blanks indicate the absence of norms for specific words.

Table 1. A sample context for the word evidence (Snefjella & Kuperman [2016: 137])

Word | Valence | Arousal | Concreteness
always | | | 1.71
offer | 5.94 | 3.42 | 2.23
zero | | | 2.86
factual | 5.89 | 3.05 | 2.41
logical | 6.60 | 4.11 | 2.11
evidence | - | - | -
false | | | 2.36
claims | 5.15 | 3.90 |
unless | | | 1.54
stupid | 2.65 | 4.68 | 1.75
unable | 2.96 | 3.76 | 1.77
Mean | 4.87 | 3.82 | 2.82

  • 5 The full list of norms can be found in the supplementary dataset of Snefjella & Kuperman [2016].

4At the next stage, Snefjella & Kuperman [2016] averaged all context means across all occurrences of each word in the corpus. The resulting norms refer to three meta-variables, i.e. ‘context valence’, ‘context arousal’, and ‘context concreteness’.5 These norms serve as “indices of the overall tendency of a word to occur in positive, exciting, or concrete contexts” (Snefjella & Kuperman [2016: 137]).
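
To make the construction of these context norms concrete, the following minimal sketch (Python, toy data) re-implements the procedure just described: a window of five content words on either side of the target, exclusion of contexts with fewer than three norm-matched words, a mean per context, and averaging across all occurrences of the target. The corpus, the norm values, and the stop-word list are illustrative stand-ins, not the materials of Snefjella & Kuperman [2016].

```python
# Illustrative sketch of the context-norm pipeline described above
# (window of 5 content words on each side, >= 3 words matched against
# the representation norms, per-context means, then averaging across
# all occurrences of the target word). Toy data, not the real datasets.
from statistics import mean

norms = {"offer": 5.94, "factual": 5.89, "logical": 6.60,
         "stupid": 2.65, "unable": 2.96, "claims": 5.15}   # e.g. valence
stopwords = {"the", "a", "of", "to", "is", "that"}          # stand-in for non-content words

def context_valence(tokens, target, window=5, min_matches=3):
    """Mean valence of the content words around each occurrence of `target`."""
    content = [t for t in tokens if t not in stopwords]
    per_context = []
    for i, tok in enumerate(content):
        if tok != target:
            continue
        ctx = content[max(0, i - window):i] + content[i + 1:i + 1 + window]
        vals = [norms[w] for w in ctx if w in norms]
        if len(vals) >= min_matches:                        # exclusion criterion
            per_context.append(mean(vals))
    # context norm = average of the per-context means across all occurrences
    return mean(per_context) if per_context else None

corpus = "the logical claims offer factual evidence that stupid people are unable to see".split()
print(context_valence(corpus, "evidence"))
```

With this toy context (built from the Table 1 words that carry valence norms), the function returns 4.865 ≈ 4.87, the context valence mean shown in Table 1.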

5Snefjella & Kuperman [2016: 139] state that “words tend to favour the company of words with similar affective and sensorimotor connotations”. For instance, the noun athlete has a representation valence of 6.16 and a context valence of 5.86, i.e. positive values in both cases, and the noun creatorship has a representation concreteness of 2.58 and a context concreteness of 1.84, i.e. low values in both cases. The moderate to strong positive correlations between representation and context ratings in Table 2 reflect this tendency.

Table 2. Correlations of context and word ratings (Snefjella & Kuperman [2016: 139])

Context valence vs. word valence | .58***
Context arousal vs. word arousal | .48***
Context concreteness vs. word concreteness | .72***

***p < .001

1. Linguistic and psycholinguistic accounts of compounding

1.1. Linguistic descriptions6

  • 6 The text in this section was excerpted from Charitonidis [2014] with minor modifications.

6Lieber & Štekauer [2009] examined a variety of phonological, syntactic, and morphological criteria to distinguish English compounds from phrases or other sorts of derived words. According to these authors, the strongest hints for establishing a word complex as a compound are left-hand stress (’cart-horse), inseparability (*[black ugly bird] for blackbird, a bird species), impossibility of first-stem modification (*a very blackbird), inability to replace the second stem with a pro-form (a riding horse… *the carriage ones), and inflection on the rightmost constituent, i.e. the head (cart-horse-s). However, as the same authors argue, none of these hints can be regarded as an absolute criterion for establishing a word complex as a compound (Lieber & Štekauer [2009: 14]).

7According to Plag [2003: 132] compounding is the most productive word-formation process in English. An inventory of compound types containing two constituent words can be found in Table 3 (Plag [2003: 144]). Compounds with more than two constituent words can be broken down into binary left-branching structures, cf. the binary structure [[bathroom towel] designer] for bathroom towel designer, etc. (see also Haider [2001]).

Table 3. Inventory of compound types in English (Plag [2003: 144])

First constituent \ Second constituent | noun (N) | verb (V) | adjective (A)
noun | film society | brainwash | stone-deaf
verb | pickpocket | stir-fry | -
adjective | greenhouse | blindfold | light-green
preposition | afterbirth | - | -

  • 7 Guevara and Scalise’s [2009] sample included Romance, Germanic, Slavic, and East Asian languages.
  • 8 As Jackendoff [2010] reports, “there are some families of left-headed compounds in English, such as (...)

8By examining ~3000 compounds in 16 different languages, Guevara & Scalise [2009: 123] state that “all languages prefer to form right-headed compounds with a certain extent of language-internal variation”.7 In English compounds with two constituent words, the right-hand constituent is, typically, the categorial and semantic head. Following the definitions in Scalise & Fábregas [2010: 124], the categorial head is the unit that defines the lexical category of the whole word (e.g. whiteboard (N) < white (A) + board (N)) and the semantic head is the unit that defines the semantic class of the word (e.g. whiteboard (object) < white (property) + board (object)).8 Regarding semantic headedness, the meaning of the compound is, typically, a hyponym of the meaning of the head (the hypernym), e.g. the meaning of the compound bedroom (the more specific term) is a hyponym of the head word room (the generic term), etc. (for the notion of ‘hyponymy’ see Löbner [2013: 205-207]).

1.2. The processing of compounds and the LADEC database

  • 9 Overviews of word recognition models can be found in Schreuder & Baayen [1995], Kuperman [2013], No (...)

9According to Libben et al. [2020], compounds have a dual nature. They usually contain constituents that are easily identifiable and, at the same time, they are used as unique structures with specific meanings. For the vast majority of compounds, “if a language user did not previously know the meaning of the whole compound word, it would be very difficult to figure it out on the basis of the meanings of the constituent elements alone” (Libben et al. [2020: 340]). Nevertheless, the activation of both whole‑word representations and constituents “is present whether or not compound words are semantically transparent and whether or not they are written with spaces, without spaces, or with hyphens” (Libben et al. [2020: 349]). This dual nature of compound processing is captured by dual- and multiple-route models of word recognition. These models propose that “the meanings of both complex word and its morphemes can be activated simultaneously”, whereby “the processing preference for either the morphemic or the whole-word route is not categorical and can be biased by the formal properties of the complex word” (Kuperman [2013: 1]).9

10Libben et al. [2020] point out that “even partial compositionality provides some aspects of a compound’s meaning. For example, people can determine that a raspberry is some type of berry even though they are not entirely sure what the “rasp” contributes to it” (Libben et al. [2020: 342]). It thus appears that native speakers of English have explicit knowledge of the categorial and semantic head-operations within compounds (see Section 1.1.).

11The Large Database of English Compounds (LADEC: Gagné et al. [2019]) is the largest existing database of compound words. It contains over 8000 nonspaced (“closed” or “concatenated”) compounds (= nouns) selected from various sources including, among others, the CELEX database [Baayen et al. 1995], the English Lexicon Project (ELP) [Balota et al. 2007], the British Lexicon Project (BLP) [Keuleers et al. 2012], the British National Corpus (BNC), and WordNet. From the full set of LADEC entries, 7,804 compounds (incl. plurals of already listed compounds as separate entries) can be uniquely parsed into two constituents that are free morphemes. A vast variety of compounds is covered, for instance noun-noun compounds (e.g. buttercup, shipyard) and compounds with a second constituent derived from a verbal stem (e.g. pacemaker, painkiller) (for definitions see Lieber [2004: 46]). The first, non-head constituent belongs to a wide range of grammatical categories. Figure 1 contains a brief sample.

Figure 1. LADEC entries: sample
  • 10 An independent (predictor or explanatory) variable is “the variable which an experimenter deliberat (...)
  • 11 Log (=logarithmic) transformation is a method replacing each variable x with a log(x). In LADEC the (...)
  • 12 A dependent (outcome or criterion) variable is “the variable measured by the experimenter. It is co (...)
  • 13 The SUBTLEX-US corpus is a 51-million-token corpus based on American English subtitles.
  • 14 A control variable is “a variable that is considered to have an effect on the response measure in a (...)
  • 15 Linear regression is a regression analysis in which the predictor or independent variables (xs) ar (...)
  • 16 According to the descriptions in Balota et al. [2007: 446], “In the lexical decision task, particip (...)

12Gagné et al. [2019] include a wide range of predictor (= independent) variables,10 such as letter length, bigram frequency at the morpheme boundary, family size, word frequency, probability and association (vector-based) measures, emotional/sentiment norms computed from participant ratings, etc. The log response times11 for compounds from ELP (lexical decision, naming) and BLP (lexical decision) were used as dependent variables.12 For the most part, compound length (number of letters) and log compound frequency from the SUBTLEX-US corpus (Brysbaert & New [2009])13 and BNC (BLP) were used as control variables.14 In various multiple-regression models,15 the above-mentioned predictor variables had significant effects on lexical decision and naming times.16

  • 17 Steiger’s [1980] z test showed that this difference was significant, z = 27.71, p < .0001 [Gagné et (...)

13The primary focus in Gagné et al. [2019] was placed on various measures of semantic transparency. Gagné et al. [2019] asked participants to rate compounds considering how predictable the meaning of the compound is from its parts (meaning predictability ratings, compound‑based) and how much of the meaning of each of the constituents is retained in the compound (meaning retention ratings, constituent-based). The authors found that the distribution of transparencies for the second constituent was much more peaked and higher than the distribution of transparencies for the first constituent (MC1: 64.80 [SD: 19.59] vs. MC2: 71.00 [SD: 16.46]. N = 8115). However, the rating for the first constituent was more strongly correlated with the rating for the entire compound than was the rating for the second constituent (c1~cmp: r = 0.75, p < .001 vs. c2~cmp: r = 0.66, p < .001. N = 429).17 Most notably, the meaning retention rating for the first constituent and the meaning predictability rating for the compound predicted all three types of response times, i.e. ELP lexical decision, BLP lexical decision, and ELP naming times.

  • 18 By referring to previous research, Gagné et al. [2019] report that “the modifier (the first constit (...)

14To conclude, the peaked and higher distribution of transparencies for the second constituent, along with the first constituent’s better association with the compound’s meaning predictability, appear to be immediately mapped onto the head operations in English compounds (Section 1.1.). The second constituent, i.e. the head, is a unit whose transparency is reinforced categorially and semantically. The first constituent, i.e. the modifier, is the most critical factor in establishing compound reference. Its transparency covaries with the transparency of the compound most strongly. Accordingly, in Gagné et al. [2019], the meaning retention rating for the first constituent was much more critical/predictive of lexical decision and naming times than the meaning retention rating for the second constituent.18

2. Emotion variables in lexical decision and naming: previous research

  • 19 Collinearity in regression analysis is “the situation in which two independent variables are so hig (...)
  • 20 Arguments against residualization in regression analyses can be found in York [2012] and Wurm and F (...)
  • 21 The (unstandardized) regression coefficient (bi) “indicates the strength of relationship between a (...)

15For his analysis of noun-noun compounds, Kuperman [2013] used the valence and arousal norms in Warriner et al. [2013] and the concreteness norms in Brysbaert et al. [2014]. In his regression models, the lexical decision times from ELP (Balota et al. [2007]) served as the dependent variable. The word length of compounds (in characters) and the log-transformed frequencies for the compounds and their first (“left”) and second (“right”) constituents (CELEX database, Baayen et al. [1995]) were used as control variables. To remove collinearity,19 the statistical method of residualization was used. By means of this method, the influence of the first and second constituents was partialed out from the influence of the whole compound.20 Table 4 below summarizes the results. β stands for the standardized regression coefficient.21 A positive β value indicates that higher values of the predictor variable are associated with longer response times, and a negative β value indicates that higher values of the predictor variable are associated with faster response times.
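
As a brief illustration of the residualization method mentioned above, the sketch below (Python, synthetic data, hypothetical variable names) regresses a compound-level measure on the corresponding constituent-level measures and keeps the residuals as the new, decorrelated compound-level predictor. It mirrors the logic of partialing out the constituents’ influence; it is not Kuperman’s [2013] actual script or variables.

```python
# Minimal sketch of residualization: the compound-level predictor is
# regressed on the constituent-level predictors, and its residuals
# (the part not shared with the constituents) replace the original
# predictor in the response-time model. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_c1 = rng.normal(size=n)                    # e.g. a first-constituent measure
x_c2 = rng.normal(size=n)                    # e.g. a second-constituent measure
x_cmp = 0.6 * x_c1 + 0.3 * x_c2 + rng.normal(scale=0.5, size=n)  # collinear compound measure

# ordinary least squares of x_cmp on [1, x_c1, x_c2]
X = np.column_stack([np.ones(n), x_c1, x_c2])
beta, *_ = np.linalg.lstsq(X, x_cmp, rcond=None)
x_cmp_resid = x_cmp - X @ beta               # residualized compound predictor

# the residuals are (numerically) uncorrelated with the constituent predictors
print(np.corrcoef(x_cmp_resid, x_c1)[0, 1], np.corrcoef(x_cmp_resid, x_c2)[0, 1])
```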

Table 4. Summary of regression models fitted to lexical decision latencies to compounds’ left constituents presented as isolated words (column B), compounds’ right constituents presented as isolated words (column C), and compound words (column D) (Kuperman [2013: 5])

A. Variable | B. RT to left (β, p) | C. RT to right (β, p) | D. RT to compounds (β, p)
Valence of left | -0.012, 0.003 | | -0.010, 0.009
Valence of right | | -0.017, 0.001 | -0.014, 0.002
Valence of compound | | | -0.018, < 0.001
Arousal of left | ns | | ns
Arousal of right | | -0.012, 0.040 | ns
Arousal of compound | | | ns
Concreteness of left | -0.014, 0.002 | | ns
Concreteness of right | | -0.008, 0.044 | ns
Concreteness of compound | | | -0.023, 0.008

N = 557 (valence, arousal). N = 704 (concreteness). ns: not significant

16Column D in Table 4 refers to the final stage of the analysis, at which each model included the valence, arousal, and concreteness values for both compounds and compound constituents as critical predictors. As can be seen, the compound and constituent valences were significant, latency-reducing predictors. The concreteness of the compound – but not the concreteness of the constituents – was associated with faster response times. No arousal predictor was found to be significant. It should be noted that in Kousta et al. [2009], words with both positive and negative valence were associated with faster response times, in contrast to Kuperman’s [2013] results.

  • 22 In Table 5, the three morphological levels are indicated with ‘cmp’ (compound), ‘c1’ (first constit (...)

17Gagné et al. [2019] set out four regression models for valence based on larger sets of items (see Section 1.2. for details). They did not use residualization in their analyses. No effects of constituent valences were detected, except for the valence of the second constituent in the BLP[SUBTLEX] model, see Table 5.22

Table 5. Standardized regression coefficients from models using frequency, stimulus length (in letters), and valence to predict English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times (Gagné et al. [2019])

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.302*** | -0.417*** | | -0.290***
BNC frequency | | | -0.534*** |
stimlen | 0.199*** | -0.032 | 0.009 | 0.248***
valence_cmp | -0.165*** | -0.218*** | -0.151*** | -0.049
valence_c1 | -0.054 | -0.022 | -0.015 | -0.055
valence_c2 | -0.003 | 0.087** | 0.046 | -0.004
N | 1076 | 950 | 950 | 1076
adj. R-sq | 0.187 | 0.223 | 0.334 | 0.168
AIC | -3605.7 | -3567.7 | -3716.5 | -4148.0

*p < .05, **p < .01, ***p < .001

18To sum up, previous research on compound processing has pointed out the beneficial, i.e. latency-reducing, effects of compound valence and concreteness. The conflicting results for constituent valence are likely to be rooted in the different items and methods used in the respective studies.

3. The present study

19The present study seeks to validate the effects of representation and context valence, arousal, and concreteness on a large sample of nominal compounds that is, additionally, not restricted to a particular lexical category regarding the first constituent. This sample is provided by the LADEC database (Gagné et al. [2019]).

  • 23 Model log likelihood is an integral part of the AIC measure (see the lnL part of the AIC equation i (...)

20Snefjella & Kuperman [2016: 140] report that, in both ELP (Balota et al. [2007]) and BLP (Keuleers et al. [2012]), the addition of context valence, arousal, and concreteness as predictors improved model likelihood above representation ratings.23 It is expected that in the time course of compound recognition, context norms will play an important role by improving the efficiency, i.e. goodness of fit, of the regression models.

21The research questions are:

  1. Do emotion variables call for head operations in lexical decision (comprehension) and naming (production)?

  2. Are context variables of emotion relevant in the time course of compound processing on a par with the corresponding representation variables?

    • 24 The term semantic dimension is used by the author for referring to valence, arousal, and concretene (...)

  3. Can an intervening level of significant representation and context predictors within semantic dimensions24 improve the goodness of fit of general time-course models?

22This paper is structured as follows. Section 4 presents the methods. Section 5 discusses (a) the correlations between the representation and context norms of emotion and (b) the correlations between the emotion and semantic-transparency norms; 44 multiple-regression models of lexical decision and naming times are then built, based on various combinations of emotion variables. In Section 6, the goodness-of-fit values for all models are compared. In Section 7, the results are discussed.

4. Methods

23The methods in this paper largely conform to the linear multiple-regression methods in Gagné et al. [2019]. As in Gagné et al. [2019], the forced-entry regression method was used. All predictors were entered simultaneously, to model response times from ELP (Balota et al. [2007]; lexical decision and naming times) and BLP (Keuleers et al. [2012]; lexical decision times). The control variables used in the regressions were the variables standardly employed in Gagné et al.’s [2019] models, i.e. compound length (number of letters) and log compound frequency from the SUBTLEX-US corpus (Brysbaert & New [2009]) and BNC (BLP). As a result of these considerations, four models for each combination of parameters were obtained, i.e. an ELP lexical decision model, a BLP lexical decision model with SUBTLEX-US frequency (referred to as “BLP[SUBTLEX] model”), a BLP lexical decision model with BNC frequency (referred to as “BLP[BNC] model”), and an ELP naming model.
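
For concreteness, the sketch below shows what one of these forced-entry models could look like in Python/statsmodels: an ELP lexical-decision model with SUBTLEX frequency and length as controls and the three representation-valence predictors entered simultaneously. The file name and column names are hypothetical placeholders, and z-scoring all variables is one simple way of obtaining standardized coefficients.

```python
# Sketch of one forced-entry model (all predictors entered simultaneously):
# log ELP lexical-decision times regressed on the control variables and the
# three valence predictors. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ladec_merged.csv")          # hypothetical merged dataset

cols = ["elp_ld_log_rt", "subtlex_log_freq", "stimlen",
        "valence_cmp", "valence_c1", "valence_c2"]
data = df[cols].dropna()                      # listwise exclusion

# z-score everything so the coefficients are standardized (betas)
z = (data - data.mean()) / data.std()

model = smf.ols(
    "elp_ld_log_rt ~ subtlex_log_freq + stimlen"
    " + valence_cmp + valence_c1 + valence_c2",
    data=z,
).fit()

print(model.summary())                        # coefficients, p-values
print(model.rsquared_adj, model.aic)          # goodness-of-fit measures
```

The same template, with BNC frequency, BLP decision times, or ELP naming times swapped in, would yield the other three models of each set.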

24The model construction had four parts. In the first part (Section 5.1.), the four models for representation valence from Gagné et al. [2019] were supplemented by four models for context valence. Sixteen additional models were constructed considering the corresponding arousal and concreteness predictors. All 24 models are henceforth referred to as “local models”. In the second part (Section 5.2.), four models were constructed including all significant predictors from the local models across semantic dimensions. These models are henceforth referred to as “global models”. In the third part (Section 5.3.), 12 models were constructed combining the significant representation and context predictors from the local models along a particular semantic dimension. These models are henceforth referred to as “nested models”. In addition, four models were constructed considering all significant predictors from the nested models across semantic dimensions. These models are henceforth referred to as “meta-models”.
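
The passage from local models to global, nested, and meta-models amounts to a “keep what was significant, refit” routine. A schematic sketch, continuing the hypothetical data frame z and the fitted model from the previous sketch, is given below; the p < .05 selection threshold and the list of control variables are assumptions made for illustration.

```python
# Schematic version of the model construction: predictors that are
# significant (p < .05) in the source models are pooled and entered into a
# new forced-entry model. The same routine yields global models (from local
# models), nested models (local models within one dimension), and meta-models
# (from nested models). Continues the hypothetical `z` and `model` above.
import statsmodels.formula.api as smf

def significant_predictors(fitted, controls=("Intercept", "subtlex_log_freq", "stimlen")):
    """Names of non-control predictors with p < .05 in a fitted model."""
    return [name for name, p in fitted.pvalues.items()
            if p < 0.05 and name not in controls]

def refit(dependent, predictors, data):
    """Forced-entry refit with the standard controls plus the selected predictors."""
    rhs = " + ".join(["subtlex_log_freq", "stimlen", *predictors])
    return smf.ols(f"{dependent} ~ {rhs}", data=data).fit()

# e.g. pool the significant predictors of several local models into a global model
local_models = [model]                         # the local valence model fitted above
pooled = sorted({p for m in local_models for p in significant_predictors(m)})
global_model = refit("elp_ld_log_rt", pooled, z)
```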

25Figure 2 below displays the steps in the model construction process by referring to the ELP lexical decision models. The eleven rectangles, one for each model, contain a specific combination (conjunction) of emotion variables. The transfer of significant variables from the source models to the target models is indicated with arrows: it goes from the Local models to the Global models (left-hand transfer, one-step derivation) and from the Local models via the Nested models to the Meta-models (right-hand transfer, two-step derivation). “cnorms” refers to context norms and the appended letters “V”, “A”, and “C” to valence, arousal, and concreteness, respectively.

Figure 2. English Lexicon Project (ELP) lexical decision (LD) models (Sections 5.1-5.3)

Global models | << Local models | >> Nested models | >> Meta-models
valence_cmp | valence_cmp | valence_cmp | valence_cmp
 | valence_c1 [ns] | |
 | valence_c2 [ns] | |
[ns] | cnormsV_cmp | [ns] |
[ns] | cnormsV_c1 | cnormsV_c1 | [ns]
 | cnormsV_c2 [ns] | |
 | arousal_cmp [ns] | |
[ns] | arousal_c1 | [ns] |
[ns] | arousal_c2 | [ns] |
[ns] | cnormsA_cmp | cnormsA_cmp | [ns]
[ns] | cnormsA_c1 | cnormsA_c1 | [ns]
[ns] | cnormsA_c2 | [ns] |
[ns] | concreteness_cmp | [ns] |
 | concreteness_c1 [ns] | |
 | concreteness_c2 [ns] | |
[ns] | cnormsC_cmp | [ns] |
 | cnormsC_c1 [ns] | |
cnormsC_c2 | cnormsC_c2 | cnormsC_c2 | cnormsC_c2

[ns]: not significant. Rows are aligned by variable; an empty cell indicates that the variable was not entered into the corresponding model.

26Goodness of fit was evaluated with two measures: adjusted R-squared and AIC. AIC was calculated as AIC = -2lnL + 2k, in which lnL refers to the maximized (full) log-likelihood of the model and k refers to the number of parameters including the constant. The lower (= more negative) the AIC value, the better the fit of the model. AIC is sensitive to sample size. Consequently, a scaled AIC value for each model was computed to facilitate the model comparison (see the label ‘AIC/N’ in the analyses to follow).
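
Both measures can be read off (or recomputed from) a fitted model. The small sketch below, which reuses the hypothetical statsmodels fits from the sketches above, makes the AIC formula and the AIC/N scaling explicit.

```python
# AIC = -2*lnL + 2k, with lnL the maximized log-likelihood and k the number
# of estimated regression parameters including the constant; the scaled
# value AIC/N is used to compare models fitted to samples of different size.
# `fitted` stands for any statsmodels OLS result.
def aic_and_scaled_aic(fitted):
    k = fitted.df_model + 1          # slopes + intercept
    lnL = fitted.llf                 # maximized log-likelihood
    aic = -2 * lnL + 2 * k           # matches fitted.aic for an OLS fit with intercept
    return aic, aic / fitted.nobs    # (AIC, AIC/N)

aic, scaled = aic_and_scaled_aic(global_model)   # any fitted model from the sketches above
print(round(aic, 1), round(scaled, 5))
```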

4.1. Independent variables

  • 25 This version was also used by Snefjella & Kuperman [2016] and Gagné et al. [2019].
  • 26 Gagné et al. [2019] use the abbreviations “stim” and “cmp” for “compound” interchangeably.

27In the multiple-regression models, the representation (word-level) and context norms of valence, arousal, and concreteness were used as the main variables of interest. The norms of representation valence and arousal were taken from a modified and expanded version of Warriner et al.’s [2013] dataset, i.e. Kuperman [2020].25 The norms of representation concreteness were taken from Brysbaert et al. [2014]. The context norms were taken from Snefjella & Kuperman [2016]. Regarding data processing, the norms of representation arousal and all context norms were merged with the LADEC database (Gagné et al. [2019]) at all three morphological levels, i.e. compound (cmp), first constituent (c1), and second constituent (c2).26
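
The merging step described above boils down to joining the same norm table to LADEC three times, keyed on the compound, the first constituent, and the second constituent. A pandas sketch follows; the file names, the exact column names, and the coding of correctParse are assumptions, although the cmp/c1/c2 suffixes follow the conventions used in this paper.

```python
# Sketch of the merging step: the same word-level norm table is joined to
# LADEC three times, once per morphological level (compound, first and
# second constituent). File names, column names and the correctParse coding
# are hypothetical.
import pandas as pd

ladec = pd.read_csv("ladec.csv")               # assumed columns: stim, c1, c2, correctParse
norms = pd.read_csv("context_norms.csv")       # assumed columns: word, cnormsV, cnormsA, cnormsC

ladec = ladec[ladec["correctParse"] == "yes"]  # keep correctly parsed compounds only (assumed coding)

for level, key in [("cmp", "stim"), ("c1", "c1"), ("c2", "c2")]:
    suffixed = norms.rename(columns={c: f"{c}_{level}" for c in ["cnormsV", "cnormsA", "cnormsC"]})
    ladec = ladec.merge(suffixed, left_on=key, right_on="word", how="left").drop(columns="word")

print(ladec.filter(like="cnorms").describe())  # descriptives of the kind shown in Table 7
```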

28The summary statistics for all representation and context variables are provided in Table 6 and Table 7, respectively. It should be noted that in Gagné et al. [2019] and in the present paper only correctly parsed compounds were considered, see the variable “correctParse” in the respective data sets. The full set of variables used in the present paper can be found in Appendix B.

Table 6. Descriptive statistics for representation valence, arousal, and concreteness

Variable | N | Mean | SD | Min | Max
valence_cmp | 2073 | 5.21 | 1.19 | 1.59 | 8.30
valence_c1 | 7500 | 5.59 | 1.15 | 1.43 | 8.37
valence_c2 | 7379 | 5.58 | 0.96 | 1.53 | 8.26
arousal_cmp | 2073 | 4.16 | 0.96 | 1.88 | 7.45
arousal_c1 | 7500 | 3.92 | 0.99 | 1.58 | 7.81
arousal_c2 | 7379 | 3.83 | 0.86 | 1.58 | 7.45
concreteness_cmp | 4331 | 3.98 | 0.77 | 1.27 | 5.00
concreteness_c1 | 8204 | 4.18 | 0.82 | 1.43 | 5.00
concreteness_c2 | 6084 | 4.27 | 0.78 | 1.22 | 5.00

Table 7. Descriptive statistics for context valence, arousal, and concreteness

Variable | N | Mean | SD | Min | Max
cnormsV_cmp | 5617 | 5.63 | 0.24 | 3.02 | 6.82
cnormsV_c1 | 8005 | 5.60 | 0.17 | 4.60 | 6.33
cnormsV_c2 | 8174 | 5.59 | 0.18 | 4.32 | 6.52
cnormsA_cmp | 5617 | 4.05 | 0.17 | 2.95 | 5.45
cnormsA_c1 | 8005 | 4.08 | 0.14 | 3.53 | 5.12
cnormsA_c2 | 8174 | 4.08 | 0.15 | 3.46 | 5.04
cnormsC_cmp | 5617 | 3.23 | 0.26 | 1.79 | 4.62
cnormsC_c1 | 8005 | 3.24 | 0.19 | 2.68 | 4.38
cnormsC_c2 | 8174 | 3.21 | 0.19 | 2.64 | 4.24

29As already mentioned, the word length of compounds (in characters) and the log word frequency for the compounds in the SUBTLEX-US corpus (Brysbaert & New [2009]) and the BNC corpus (BLP: Keuleers et al. [2012]) were used as control variables. The summary statistics for the control variables are provided in Table 8.

Table 8. Descriptive statistics for word length (in characters) and log word frequency from SUBTLEX and BNC (BLP)

Variable | N | Mean | SD | Min | Max
Word length | 8372 | 9.21 | 1.62 | 6 | 17
SUBTLEX frequency (per million) | 4944 | 1.06 | 0.62 | 0.30 | 4.36
BNC frequency (per million) | 2400 | -0.49 | 0.69 | -2.00 | 2.27

4.2. Dependent variables

30The log response times for the compounds from the English Lexicon Project (ELP: Balota et al. [2007]) and the British Lexicon Project (BLP: Keuleers et al. [2012]) were used as dependent variables. Table 9 below contains the summary statistics for the lexical decision and naming times with reference to the original and log-transformed data.

Table 9. Descriptive statistics for lexical decision times from the English Lexicon Project (ELP) and British Lexicon Project (BLP) and for naming times from ELP

Variable | N | Mean | SD | Min | Max
ELP RT | 2942 | 805.02 | 125.76 | 533.64 | 1587.50
ELP RT (log10) | 2942 | 2.90 | 0.06 | 2.73 | 3.20
BLP RT | 2400 | 684.12 | 81.77 | 431.00 | 1274.00
BLP RT (log10) | 2400 | 2.83 | 0.05 | 2.63 | 3.11
ELP naming | 2943 | 715.64 | 90.53 | 545.74 | 1157.50
ELP naming (log10) | 2943 | 2.85 | 0.05 | 2.74 | 3.06

31The input data for all independent and dependent variables can be found in the supplementary dataset (Researchgate.net).

5. Analyses

  • 27 Listwise exclusion (or listwise deletion) removes all cases with at least one missing value in one (...)

32Before I proceed to the main results of this study, I would like to present the relationships between the representation and context predictors employed in the multiple-regression models. Table 10 provides the respective Pearson correlations. Information was available for 988 items (listwise exclusion).27 Henceforth, I will refer to the significant correlations alone. Correlations below 0.3 will be regarded as weak, correlations from 0.3 to 0.7 as moderate, and correlations higher than 0.7 as strong.
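
Operationally, Table 10 corresponds to a listwise-deleted Pearson correlation matrix over the eighteen predictors. A short sketch, reusing the hypothetical merged data frame from the sketch in Section 4.1., is given below.

```python
# Pearson correlations among the representation and context predictors,
# after listwise exclusion of items with any missing value (cf. footnote 27).
# `ladec` is the hypothetical merged data frame from the earlier sketch.
predictors = [
    "valence_cmp", "valence_c1", "valence_c2",
    "arousal_cmp", "arousal_c1", "arousal_c2",
    "concreteness_cmp", "concreteness_c1", "concreteness_c2",
    "cnormsV_cmp", "cnormsV_c1", "cnormsV_c2",
    "cnormsA_cmp", "cnormsA_c1", "cnormsA_c2",
    "cnormsC_cmp", "cnormsC_c1", "cnormsC_c2",
]
complete = ladec[predictors].dropna()          # listwise exclusion
corr = complete.corr(method="pearson")         # correlation matrix of the kind shown in Table 10
print(len(complete))
print(corr.round(2))
```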

  • 28 With reference to representation valence and concreteness, there were moderate positive correlation (...)

33Most of the highest correlations referred to the relations between representation and context norms. In particular, all representation norms correlated moderately and positively with the context norms at the same morphological level (cmp, c1, c2). The strong correlation between representation and context concreteness, reported in Snefjella & Kuperman [2016], could not be confirmed (see Table 2 in the Introduction).28

  • 29 The labels for the semantic transparency variables, i.e. “ratingcmp”, “ratingC1”, and “ratingC2”, w (...)

34Let us now look at the relationships of representation and context norms to the semantic transparency norms in Gagné et al. [2019]. Tables 11 and 12 provide the respective Pearson correlations. Information for all variables was available for 981 items (listwise exclusion).29

  • 30 The Steiger’s [1980] z two-tailed test was conducted using the package cocor for the R programming (...)

35The correlations between the semantic transparency norms themselves (Section 1.2.) were confirmed. In particular, the meaning retention rating for the first constituent correlated strongly with the meaning predictability rating for the compound, r = 0.77, p < .001. The meaning retention rating for the second constituent correlated moderately with the meaning predictability rating for the compound, r = 0.69, p < .001. As in Gagné et al. [2019], Steiger’s [1980] z test for comparing correlations showed that the above-mentioned difference in correlation strength was significant, z = 3.98, p < .001 (two-tailed).30 As in Gagné et al. [2019], the meaning retention ratings for the first and second constituent correlated weakly with one another, r = 0.25, p < .001.
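
Footnote 30 indicates that this comparison was run with the cocor package for R. The self-contained Python sketch below re-implements Steiger’s [1980] z for two dependent correlations sharing one variable (via the Fisher z transformation) and reproduces the value reported above from r = .77, r = .69, r = .25, and N = 981.

```python
# Steiger's (1980) z for two dependent correlations that share one variable:
# here r(c1, cmp) = .77 and r(c2, cmp) = .69, with r(c1, c2) = .25 and N = 981.
# The paper itself used the R package cocor; this is an illustrative
# re-implementation based on the Fisher z transformation.
from math import atanh, sqrt
from statistics import NormalDist

def steiger_z(r_jk, r_jh, r_kh, n):
    """Compare r_jk and r_jh, which share variable j, given r_kh and sample size n."""
    r_bar = (r_jk + r_jh) / 2
    cov = (r_kh * (1 - 2 * r_bar**2)
           - 0.5 * r_bar**2 * (1 - 2 * r_bar**2 - r_kh**2)) / (1 - r_bar**2) ** 2
    z = (atanh(r_jk) - atanh(r_jh)) * sqrt((n - 3) / (2 - 2 * cov))
    p = 2 * (1 - NormalDist().cdf(abs(z)))     # two-tailed p-value
    return z, p

print(steiger_z(0.77, 0.69, 0.25, 981))        # approx. (3.98, p < .001)
```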

  • 31 It should be noted that the second highest (weak to moderate) correlation was between representatio (...)

36Most notably, the semantic transparency norms in Gagné et al. [2019] correlated weakly with the representation and context norms of valence, arousal, and concreteness. As an exception, there was a positive moderate correlation between the meaning retention norms for the second constituent and the representation concreteness norms for the full compound, i.e. r = 0.33, p < .001 (Table 11). This correlation may be a function of the morphological head.31

In a nutshell, these patterns suggest that (a) valence and arousal are largely dissociated from semantic transparency at both the representation and the context level, and (b) representation concreteness for the compound interacts considerably only with the meaning retention rating for the second constituent.

Table 10. Pearson correlations among representation norms of emotion (Brysbaert et al. [2014], Kuperman [2020]) and context norms of emotion (Snefjella & Kuperman [2016])


Table 11. Pearson correlations among representation norms of emotion (Brysbaert et al. [2014], Kuperman [2020]), and semantic transparency norms (Gagné et al. [2019])


Table 12. Pearson correlations among context norms of emotion (Snefjella & Kuperman [2016]) and semantic transparency norms (Gagné et al. [2019])


5.1. Local models

37This section contains the multiple‑regression models of lexical decision and naming times with reference to a particular representation or context variable. These models are referred to as “local models”.

5.1.1. Local models with representation norms

38Table 13 below contains the lexical decision and naming models for representation valence from Gagné et al. [2019], already presented in Section 2. An extra row at the end of the table contains scaled AIC values (AIC/N). As already pointed out, compound valence was a significant negative predictor of lexical decision times. With the exception of the BLP[SUBTLEX] model, there were no effects of constituent valences.

39In the ELP naming model there were no valence effects. As Kuperman [2013: 5] points out, “speeded naming has been repeatedly shown to be a more shallow task in that it does not implicate word semantics… and can be performed on a purely formal basis” (see also Balota et al. [2004], Ferrand et al. [2011], etc.).

Table 13. Local models for representation valence: Standardized regression coefficients from models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times (Gagné et al. [2019])

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.302*** | -0.417*** | | -0.290***
BNC frequency | | | -0.534*** |
stimlen | 0.199*** | -0.032 | 0.009 | 0.248***
valence_cmp | -0.165*** | -0.218*** | -0.151*** | -0.049
valence_c1 | -0.054 | -0.022 | -0.015 | -0.055
valence_c2 | -0.003 | 0.087** | 0.046 | -0.004
N | 1076 | 950 | 950 | 1076
adj. R-sq | 0.187 | 0.223 | 0.334 | 0.168
AIC | -3605.7 | -3567.7 | -3716.5 | -4148.0
AIC/N | -3.35102 | -3.75547 | -3.91211 | -3.85502

*p < .05, **p < .01, ***p < .001

40Table 14 below contains the lexical decision and naming models for representation arousal. In line with Kuperman [2013], no effects were found at compound level. In the ELP and BLP lexical decision models, arousal for the first constituent was a significant positive predictor, as opposed to Kuperman [2013] in which this predictor was not significant. In the ELP naming model no arousal effects were found.

Table 14. Local models for representation arousal: Standardized regression coefficients from models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.327*** | -0.439*** | | -0.310***
BNC frequency | | | -0.560*** |
stimlen | 0.182*** | -0.040 | 0.004 | 0.237***
arousal_cmp | 0.005 | 0.030 | -0.011 | 0.046
arousal_c1 | 0.076* | 0.088** | 0.058* | 0.060
arousal_c2 | 0.082** | 0.034 | -0.024 | 0.040
N | 1076 | 950 | 950 | 1076
adj. R-sq | 0.162 | 0.187 | 0.315 | 0.171
AIC | -3573.904 | -3525.186 | -3690.052 | -4151.738
AIC/N | -3.32147 | -3.71072 | -3.88427 | -3.85849

*p < .05, **p < .01, ***p < .001

41Table 15 below contains the lexical decision and naming models for representation concreteness. In the lexical decision models, high concreteness for the compound predicted faster response times, in line with Kuperman [2013]. In the BLP models, high concreteness for the first constituent was associated with faster response times, and high concreteness for the second constituent was associated with longer response times. In contrast to these results, Kuperman [2013] found no effects of constituent concreteness. In the ELP naming model, no concreteness effects were found, similar to the ELP naming models for valence and arousal.

Table 15. Local models for representation concreteness: Standardized regression coefficients from models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.505*** | -0.492*** | | -0.420***
BNC frequency | | | -0.506*** |
stimlen | 0.220*** | 0.005 | 0.071** | 0.273***
concreteness_cmp | -0.055* | -0.119*** | -0.173*** | 0.011
concreteness_c1 | 0.006 | -0.059* | -0.055* | 0.002
concreteness_c2 | 0.015 | 0.094*** | 0.080** | 0.014
N | 2116 | 1313 | 1465 | 2117
adj. R-sq | 0.335 | 0.271 | 0.289 | 0.277
AIC | -6718.59 | -4630.23 | -5057.27 | -7410.74
AIC/N | -3.17514 | -3.52645 | -3.45206 | -3.50059

*p < .05, **p < .01, ***p < .001

5.1.2. Local models with context norms

42Table 16 below contains the lexical decision and naming models for context valence. In both lexical decision and naming, high context valence for both the compound and the first constituent predicted faster response times. This pattern clearly shows that context valence maps an evaluative head onto the first constituent. Higher values of context valence for the second constituent predicted faster response times in the BLP[BNC] and ELP naming models alone.

Table 16. Local models for context valence: Standardized regression coefficients from models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.491*** | -0.461*** | | -0.408***
BNC frequency | | | -0.479*** |
stimlen | 0.211*** | -0.006 | 0.043* | 0.271***
cnormsV_cmp | -0.056** | -0.079*** | -0.092*** | -0.037*
cnormsV_c1 | -0.066*** | -0.064** | -0.043* | -0.044*
cnormsV_c2 | -0.017 | -0.032 | -0.037* | -0.044*
N | 2375 | 1923 | 2307 | 2376
adj. R-sq | 0.316 | 0.231 | 0.251 | 0.270
AIC | -7543.7 | -6825.5 | -7963.3 | -8399.6
AIC/N | -3.17496 | -3.54940 | -3.45180 | -3.53519

*p < .05, **p < .01, ***p < .001

43Table 17 below contains the lexical decision and naming models for context arousal. With the exception of the BLP[BNC] model, higher values of context arousal predicted longer lexical decision and naming times. In the ELP and BLP[SUBTLEX] models, higher values of context arousal predicted longer response times for both the compound and the first constituent. In the ELP naming model, only the predictors for the constituents were successful.

Table 17. Local models for context arousal: Standardized regression coefficients from models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.509*** | -0.481*** | | -0.422***
BNC frequency | | | -0.482*** |
stimlen | 0.201*** | -0.015 | 0.039* | 0.264***
cnormsA_cmp | 0.052** | 0.045* | -0.045* | 0.025
cnormsA_c1 | 0.062*** | 0.046* | -0.002 | 0.057**
cnormsA_c2 | 0.039* | 0.012 | 0.036 | 0.054**
N | 2375 | 1923 | 2307 | 2376
adj. R-sq | 0.317 | 0.221 | 0.239 | 0.272
AIC | -7546.49 | -6802.122 | -7925.572 | -8405.24
AIC/N | -3.17747 | -3.53724 | -3.43545 | -3.53756

*p < .05, **p < .01, ***p < .001

44Table 18 below contains the lexical decision and naming models for context concreteness. In the lexical decision models, higher values of context concreteness for the compound predicted faster response times, similar to the effects of representation concreteness. With the exception of the BLP[BNC] model, higher values of context concreteness for the second constituent predicted longer response times across lexical decision and naming.

Table 18. Local models for context concreteness: Standardized regression coefficients from models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.490*** | -0.464*** | | -0.409***
BNC frequency | | | -0.486*** |
stimlen | 0.204*** | -0.017 | 0.035 | 0.266***
cnormsC_cmp | -0.053** | -0.053* | -0.075*** | -0.014
cnormsC_c1 | 0.025 | -0.009 | -0.007 | 0.013
cnormsC_c2 | 0.048** | 0.061** | 0.018 | 0.040*
N | 2375 | 1923 | 2307 | 2376
adj. R-sq | 0.310 | 0.221 | 0.242 | 0.264
AIC | -7522.264 | -6801.602 | -7934.436 | -8381.998
AIC/N | -3.16727 | -3.53697 | -3.43929 | -3.52778

*p < .05, **p < .01, ***p < .001

45The positive predictors of context concreteness for the second constituent in both lexical decision and naming represent a unique effect. I argue that this effect is called for by the morphological head (Section 1.1.). To preserve the hyponymy relation and narrow down the referential workings of the first constituent (Section 1.2.), the second constituent should maintain a certain level of contextual abstractness. To put it another way, a second compound constituent that, as a free morpheme, appears typically in contexts of high concreteness would inhibit the referential function of the first constituent.

46According to Paivio [1978: 381, 2007: 330-357], concrete words refer to both a verbal (language) and a non-verbal (perceptual) code, that is, they are doubly activated. In concrete contexts, many words of high concreteness show up. I assume that, at least in the domain of compounding, these words call for an orthogonally different (perhaps associative/procedural or episodic-memory) task from visual whole-word recognition. In a nutshell, if the second compound constituent corresponds to a standalone word that usually appears in contexts of high concreteness, then the strong associative value of this constituent would run counter to the accelerating effects of compound representations (see also Section 7).

47To conclude, this section showed that, in compounding, higher values of context valence accelerate processing, while higher values of context arousal slow down processing. In both lexical decision and naming, (a) context valence introduced an evaluative head for the first constituent, and (b) both higher values of context arousal for the first constituent and higher values of context concreteness for the second constituent predicted longer response times.

5.2. Global models

48The analysis in this section considers all significant representation and context predictors from the local models across variables. The resulting models are referred to as “global models”. It should be noted that the samples of global models were considerably smaller than the samples of local models due to the different items contained in the source datasets (listwise exclusion). Table 19 below displays the test results.

Table 19. Global models: Standardized regression coefficients from models using all significant representation and context predictors from the local models to predict English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable | ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
SUBTLEX frequency | -0.332*** | -0.436*** | | -0.411***
BNC frequency | | | -0.580*** |
stimlen | 0.181*** | -0.019 | 0.024 | 0.270***
valence_cmp | -0.140*** | -0.200*** | -0.097* |
valence_c1 | | | |
valence_c2 | | 0.030 | |
cnormsV_cmp | -0.016 | 0.067 | -0.009 | -0.040*
cnormsV_c1 | -0.059 | -0.043 | -0.059 | -0.023
cnormsV_c2 | | | -0.023 | -0.023
arousal_cmp | | | |
arousal_c1 | 0.002 | 0.034 | 0.017 |
arousal_c2 | 0.041 | | |
cnormsA_cmp | -0.018 | 0.004 | -0.081* |
cnormsA_c1 | 0.019 | 0.013 | | 0.046*
cnormsA_c2 | 0.044 | | | 0.058**
concreteness_cmp | -0.068 | -0.088 | -0.115** |
concreteness_c1 | | -0.045 | -0.074* |
concreteness_c2 | | -0.030 | -0.055 |
cnormsC_cmp | -0.007 | 0.012 | -0.014 |
cnormsC_c1 | | | |
cnormsC_c2 | 0.066* | 0.092* | | 0.058**
N | 854 | 594 | 612 | 2376
adj. R-sq | 0.204 | 0.256 | 0.366 | 0.276
AIC | -2841.37 | -2231.362 | -2401.364 | -8417.148
AIC/N | -3.32713 | -3.75650 | -3.92380 | -3.54257

*p < .05, **p < .01, ***p < .001. Empty cells: predictor not entered into the model (i.e. not significant in the corresponding local model).

49In the ELP and BLP[SUBTLEX] lexical-decision models, only two variables were successful, i.e. representation valence for the compound and context concreteness for the second constituent. The latter was also successful in the naming model, together with context valence for the compound and context arousal for the first and second constituent. Most notably, context concreteness for the second constituent was successful in the models that include the same word-frequency control variable, i.e. SUBTLEX-US.

50The BLP[BNC] model was considerably different. In particular, the negative predictors of representation concreteness for both the compound and the first constituent were successful, together with the negative predictors of representation valence and context arousal for the compound.

5.3. Nested models and meta-models

51This section elaborates the assumption of a unified lexical encoding of representation and context norms of valence, arousal, and concreteness, proposed in Snefjella & Kuperman [2016: 144]. In particular, the analysis followed two steps. First, the significant representation and context predictors of valence, arousal, and concreteness from the local models were combined into three general models of valence, arousal, and concreteness. These models are referred to as “nested models”. Second, the successful predictors within the nested models were juxtaposed across semantic dimensions. The resulting models are referred to as “meta-models”. The main rationale behind setting up these models was the presumed advantage of referring to a smaller set of predictors than that included in the global models (Section 5.2.). A smaller set of predictors is typically associated with more efficient models (models with lower information loss). In this context, the Akaike information criterion (AIC) penalizes, as a goodness-of-fit measure, the use of a large number of predictors, which potentially results in higher AIC values (see the +2k part of the AIC equation in Section 4, where k stands for the number of parameters). The statistical data for both nested models and meta-models can be found in Appendix A.

52Let us first see the patterns of nested models. In the lexical-decision models (a) the negative predictors of representation valence for the compound were successful, while the context predictors were, for the most part, suppressed, (b) all predictors for representation arousal were suppressed, while the positive predictors for context arousal were mostly successful, and (c) most of the negative predictors for representation concreteness were successful. The positive predictors of context concreteness for the second constituent were successful in both lexical decision and naming.

53Let us now turn to the patterns of the meta-models. The significant representation and context predictors in these models were almost identical to the significant predictors in the global models (Section 5.2.). Only one new significant predictor showed up, namely representation concreteness for the compound in the BLP[SUBTLEX] model. The scaled AIC value of this meta-model is comparable to (slightly better than) the scaled AIC value of the corresponding global model, suggesting the relevance of this additional pattern.

54As Connell & Lynott [2012] report, “concreteness effects refer to a behavioral advantage for words that refer to concrete concepts, which are processed more quickly and accurately than abstract concepts in tasks such as lexical decision and word naming” (see Connell & Lynott [2012: 1428] and the references therein). Contrary to these statements, in all naming models built for the present paper and at all morphological levels, representation concreteness was not successful – the same applies to the effects of representation valence and arousal. With reference to the nested models and meta-models in this section, representation concreteness for the compound was successful in the BLP lexical-decision models alone. More research is required to demarcate the scope of representation concreteness in the time course of compound processing.

6. Juxtaposing measures of goodness of fit

55In this section, the focus is on the goodness-of-fit of lexical decision and naming models built for the present paper, in particular global models and meta-models. For this task, all models were compared to one another with reference to the measures ‘adjusted R2’ and ‘AIC’.

56Tables 20 and 21 below contain the full set of values arranged in descending order, that is better models appear at the top. The subscripts ‘v’, ‘a’, and ‘c’ stand for ‘valence’, ‘arousal’, and ‘concreteness’. ‘rep’ and ‘con’ stand for ‘representation’ and ‘context’. The values for the global models and meta-models are indexed by the subscripts ‘global’ and ‘meta’, respectively, and appear in bold face.

Table 20. Adjusted R2 values for local models, global models, nested models, and meta-models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
0.335 (c_rep_local) | 0.271 (c_rep_local) | 0.366 (global) | 0.277 (c_rep_local)
0.332 (c_nested) | 0.270 (c_nested) | 0.356 (meta) | 0.276 (global)
0.317 (a_con_local) | 0.265 (meta) | 0.335 (v_nested) | 0.276 (meta)
0.316 (v_con_local) | 0.256 (global) | 0.334 (v_rep_local) | 0.273 (a_nested)
0.310 (c_con_local) | 0.234 (a_nested) | 0.315 (a_rep_local) | 0.272 (a_con_local)
0.310 (a_nested) | 0.231 (v_con_local) | 0.289 (c_rep_local) | 0.270 (v_nested)
0.204 (global) | 0.223 (v_rep_local) | 0.287 (c_nested) | 0.270 (v_con_local)
0.192 (meta) | 0.223 (v_nested) | 0.251 (v_con_local) | 0.264 (c_con_local)
0.187 (v_rep_local) | 0.221 (c_con_local) | 0.242 (c_con_local) | 0.263 (c_nested)
0.186 (v_nested) | 0.221 (a_con_local) | 0.239 (a_nested) | 0.171 (a_rep_local)
0.162 (a_rep_local) | 0.187 (a_rep_local) | 0.239 (a_con_local) | 0.168 (v_rep_local)

Table 21. Scaled AIC values for local models, global models, nested models, and meta-models predicting English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

ELP LD | BLP LD[SUBTLEX] | BLP LD[BNC] | ELP Naming
-3.35102 (v_rep_local) | -3.77919 (meta) | -3.92797 (v_nested) | -3.85849 (a_rep_local)
-3.32713 (global) | -3.75650 (global) | -3.92380 (global) | -3.85502 (v_rep_local)
-3.32147 (a_rep_local) | -3.75608 (v_nested) | -3.91418 (meta) | -3.54257 (global)
-3.31921 (v_nested) | -3.75547 (v_rep_local) | -3.91211 (v_rep_local) | -3.54257 (meta)
-3.31829 (meta) | -3.71072 (a_rep_local) | -3.88427 (a_rep_local) | -3.53756 (a_con_local)
-3.20034 (a_nested) | -3.55872 (a_nested) | -3.45206 (c_rep_local) | -3.53728 (a_nested)
-3.17747 (a_con_local) | -3.54940 (v_con_local) | -3.45180 (v_con_local) | -3.53519 (v_nested)
-3.17514 (c_rep_local) | -3.53724 (a_con_local) | -3.44958 (c_nested) | -3.53519 (v_con_local)
-3.17496 (v_con_local) | -3.53697 (c_con_local) | -3.44390 (a_nested) | -3.52778 (c_con_local)
-3.16727 (c_con_local) | -3.53021 (c_nested) | -3.43929 (c_con_local) | -3.50059 (c_rep_local)
-3.15927 (c_nested) | -3.52645 (c_rep_local) | -3.43545 (a_con_local) | -3.49349 (c_nested)

As can be seen, global models and meta-models have similar goodness-of-fit values. For the most part, they show a higher adjusted R2 value and a lower (= more negative) AIC value than the local and nested models, with the global models being slightly better than the meta-models. As an exception, the BLP[SUBTLEX] meta-model shows a better fit than the corresponding global model. Focussing on the AIC measure, the global models achieved a very good ranking across tasks, competing mostly with the local models for representation valence.

7. Discussion

57With reference to the research questions in Section 3, the results of this study are:

  1. In both lexical decision and naming, context valence maps an evaluative head onto the first constituent. In particular, the negative predictor of context valence for the first constituent co-occurs with the negative predictor of context valence for the compound. This left-hand head is diametrically opposed to the right-hand morphological head.

  2. When juxtaposed with representation variables of emotion within time-course models, context variables of emotion are, for the most part, not favoured. Only context concreteness for the second constituent remains significant.

  3. An intervening level of significant representation and context predictors within semantic dimensions does not necessarily improve the goodness of fit of general time-course models.

58Let us now discuss the results in detail. The local lexical decision models (Section 5.1.) showed that, with reference to a particular semantic dimension, the effect directions of representation and context norms were similar (see also Snefjella & Kuperman [2016: 144]). In the local naming models, the patterns were dissociated. While there were no effects of representation norms, the effects of context norms were considerable. Most notably, in all naming models, the context variables for the second constituent were mapped onto significant coefficients (negative for context valence and positive for context arousal and concreteness).

59In the global models (Section 5.2.) all significant predictors from the local models were considered across semantic dimensions. In the lexical decision models, representation valence for the compound and context concreteness for the second constituent were successful (ELP and BLP[SUBTLEX] models). In the BLP[BNC] model, representation valence for the compound was successful on a par with representation concreteness for the compound and the first constituent. In the naming model, most context predictors from the local models remained significant.

60The nested models (Section 5.3.) posited a unified encoding of representation and context variables along a particular semantic dimension, as suggested by Snefjella & Kuperman [2016]. As in the global models, the input predictors were the significant predictors from the local models. In the lexical decision models, (a) representation valence for the compound was successful while almost all predictors for context valence were suppressed, (b) all predictors for representation arousal were suppressed, while most predictors for context arousal were successful, and (c) most predictors for representation concreteness remained significant (BLP models). Context concreteness for the second constituent was successful in both lexical decision and naming.

61The meta-models adopted the significant predictors from the nested models. The output set of significant predictors was almost identical to the output set of significant predictors in the global models. Most notably, representation concreteness for the compound was successful in both the BLP[SUBTLEX] and BLP[BNC] models.

62In both global models and meta-models, the positive coefficients of context concreteness for the second constituent suggest that lower values of context concreteness facilitate processing while higher values of context concreteness slow down processing. This pattern applies to both lexical decision and naming, implying that context concreteness for the second constituent is a highly important, perhaps obligatory, parameter in the time course of compound processing.

63The similar effects of context concreteness for the second constituent in both lexical decision and naming are compatible with semantic neighbourhood effects (Buchanan et al. [2001], Danguecan & Buchanan [2016]). Buchanan et al. [2001: 531] report that “semantic neighbourhood can predict performance on both lexical decision and word naming”. In an object-based view of the semantic system, “activation spreads to (or otherwise facilitates the activation of) words that are highly related but are not necessarily similar in terms of their features” (Buchanan et al. [2001: 532]). This pattern is borne out in contexts of high concreteness in which, typically, a large number of related concrete words shows up. From a different perspective, a large number of related concrete words may call for latency-inducing “competition effects” (Danguecan & Buchanan [2016: 4]).

64It remains to be seen whether context concreteness for the second constituent can reconstruct the notion of morphological head in English compounding along with highly relevant theoretical notions such as hyponymy (Gagné et al. [2020]). There are two important requirements for this reconstruction: the included variables should be (a) successful across lexical decision and naming, and (b) favoured regarding model-fit tests, that is, the omission of either variable should lead to inferior models.

65To conclude, the present paper confirmed the dual nature of compound processing, with both semantic and formal parameters contributing to the various time-course models.


Bibliography

APA Dictionary of Psychology. American Psychological Association, https://dictionary.apa.org.

Baayen R. Harald, Piepenbrock Richard & Gulikers Leon, 1995, The CELEX lexical database, Data set, Release 2, CD-ROM, Linguistic Data Consortium, Philadelphia: University of Pennsylvania, also available at https://catalog.ldc.upenn.edu.

Balota David A., Cortese Michael J., Sergent-Marshall Susan D., Spieler Daniel H. & Yap Melvin J., 2004, “Visual word recognition of single-syllable words”, Journal of Experimental Psychology: General 133(2), 283-316.

Balota David A., Yap Melvin J., Cortese Michael J., Hutchison Keith I., Kessler Brett, Loftis Bjorn, Neely James H., Nelson Douglas L., Simpson Greg B. & Treiman Rebecca, 2007, “The English Lexicon Project”, Behavior Research Methods 39, 445-459.

Brysbaert Marc & New Boris, 2009, “Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English”, Behavior Research Methods 41, 977-990.

Brysbaert Marc, Warriner Amy Beth & Kuperman Victor, 2014, “Concreteness ratings for 40 thousand generally known English word lemmas”, Behavior Research Methods 46, 904-911.

Buchanan Lori, Westbury Chris & Burgess Curt, 2001, “Characterizing semantic space: Neighborhood effects in word recognition”, Psychonomic Bulletin & Review 8(3), 531-544.

Charitonidis Chariton, 2014, “The linking of denotational and socio-expressive heads in Modern Greek and English compounding”, Italian Journal of Linguistics / Rivista di Linguistica 26(2), 9-50.

Citron Francesca M. M., Cacciari Cristina, Kucharski Michael, Beck Luna, Conrad Markus & Jacobs Arthur M., 2016, “When emotions are expressed figuratively: Psycholinguistic and affective norms of 619 idioms for German (PANIG)”, Behavior Research Methods 48, 91-111.

Collins English Dictionary. HarperCollins Publishers, https://www.collinsdictionary.com.

Connell Louise & Lynott Dermot, 2012, “Strength of perceptual experience predicts word processing performance better than concreteness or imageability”, Cognition 125(3), 452-465.

Danguecan Ashley N. & Buchanan Lori, 2016, “Semantic neighborhood effects for abstract versus concrete words”, Frontiers in Psychology 7, Article 1034.

Ferrand Ludovic, Brysbaert Marc, Keuleers Emmanuel, New Boris, Bonin Patrick, Méot Alain, Augustinova Maria & Pallier Christophe, 2011, “Comparing word processing times in naming, lexical decision, and progressive demasking: evidence from Chronolex”, Frontiers in Psychology 2, Article 306.

Field Andy, 2009 [2000], Discovering statistics using SPSS, London: Sage Publications.

Gagné Christina L., Spalding Thomas L. & Schmidtke Daniel, 2019, “LADEC: The large database of English compounds”, Behavior Research Methods 51, 2152-2179.

Gagné Christina L., Spalding Thomas L., Spicer Patricia, Wong Dixie, Rubio Beatriz & Cruz Karen Perez, 2020, “Is buttercup a kind of cup? Hyponymy and semantic transparency in compound words”, Journal of Memory and Language 113.

Guevara Emiliano & Scalise Sergio, 2009, “Searching for universals in compounding”, in Scalise Sergio, Magni Elisabetta & Bisetto Antonietta (Eds.), Universals of Language Today, Dordrecht: Springer, 101-128.

Haider Hubert, 2001, “Why are there no complex head-initial compounds?”, in Schaner-Wolles Chris, Rennison John & Neubart Friedrich (Eds.), Naturally!, Torino: Rosenberg & Sellier, 165-174.

Jackendoff Ray, 2010, “The ecology of English noun-noun compounds”, in Jackendoff Ray, Meaning and the Lexicon, Oxford: Oxford University Press, 413-451.

Keuleers Emmanuel, Lacey Paula, Rastle Kathleen & Brysbaert Marc, 2012, “The British Lexicon Project: Lexical decision data for 28,730 monosyllabic and disyllabic English words”, Behavior Research Methods 44, 287-304.

Kousta Stavroula-Thaleia, Vinson David P. & Vigliocco Gabriella, 2009, “Emotion words, regardless of polarity, have a processing advantage over neutral words”, Cognition 112, 473-481.

Kuperman Victor, 2013, “Accentuate the positive: Diagnostics of semantic access in English compounds”, Frontiers in Language Sciences 4, Article 203.

Kuperman Victor, 2020, Modified and expanded norms for valence, arousal and dominance, Data set, https://osf.io/zj3u8

Libben Gary, Gagné Christina L. & Dressler Wolfgang U., 2020, “The representation and processing of compound words”, in Pirrelli Vito, Plag Ingo & Dressler Wolfgang U. (Eds.), Word Knowledge and Word Usage, Berlin/Boston: de Gruyter, 336-352, also available at https://library.oapen.org

Lieber Rochelle, 2004, Morphology and lexical semantics, Cambridge: Cambridge University Press.

Lieber Rochelle & Štekauer Pavol, 2009, “Introduction: Status and definition of compounding”, in Lieber Rochelle & Štekauer Pavol (Eds.), The Oxford Handbook of Compounding, New York: Oxford University Press, 3-18.

Löbner Sebastian, 2013 [2002], Understanding semantics, London: Routledge.

Norris Dennis, 2013, “Models of visual word recognition”, Trends in Cognitive Sciences 17(10), 517-524.

Paivio Allan, 1978, “The relationship between verbal and perceptual codes”, in Carterette Edward C. & Friedman Morton P. (Eds.), Perceptual Coding, New York/London: Academic Press, 375-397.

Paivio Allan, 2007, Mind and its evolution: A dual coding theoretical approach, New York: Lawrence Erlbaum Associates.

Plag Ingo, 2003, Word-formation in English, Cambridge: Cambridge University Press.

Scalise Sergio & Fábregas Antonio, 2010, “The head in compounding”, in Scalise Sergio & Vogel Irene (Eds.), Cross-Disciplinary Issues in Compounding, Amsterdam: John Benjamins, 109-125.

Schreuder Robert & Baayen R. Harald, 1995, “Modeling morphological processing”, in Feldman Laurie B. (Ed.), Morphological Aspects of Language Processing, New York: Lawrence Erlbaum Associates, 131-154.

Shaoul Cyrus & Westbury Chris, 2013, A reduced redundancy usenet corpus (2005-2011), Data set, Edmonton, AB: University of Alberta, http://www.psych.ualberta.ca/~westburylab/downloads/usenetcorpus.download.html.

Snefjella Bryor & Kuperman Victor, 2016, “It’s all in the delivery: Effects of context valence, arousal, and concreteness on visual word processing”, Cognition 156, 135-146.

Steiger James H., 1980, “Tests for comparing elements of a correlation matrix”, Psychological Bulletin 87, 245-251.

Warriner Amy Beth, Kuperman Victor & Brysbaert Marc, 2013, “Norms of valence, arousal, and dominance for 13,915 English lemmas”, Behavior Research Methods 45(4), 1191-1207.

Whelan Robert, 2010, “Effective analysis of reaction time data”, The Psychological Record 58(3), 475-482.

Wurm Lee H. & Fisicaro Sebastiano A., 2014, “What residualizing predictors in regression analyses does (and what it does not do)”, Journal of Memory and Language 72, 37-48.

Yao Zhao, Wu Jia, Zhang Yanyan & Wang Zhenhong, 2016, “Norms of valence, arousal, concreteness, familiarity, imageability, and context availability for 1,100 Chinese words”, Behavior Research Methods 49, 1374-1385.

York Richard, 2012, “Residualization is not the answer: Rethinking how to address multicollinearity”, Social Science Research 41(6), 1379-1386.


Appendix

Appendix A. Nested models and meta-models: Statistical data

Table 22. Nested models for valence: Standardized regression coefficients from models using the significant predictors from the local models to predict English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable

ELP LD

BLP LD

BLP LD

ELP Naming

SUBTLEX frequency

-0.305***

-0.416***

-0.408***

BNC frequency

-0.539***

stimlen

0.201***

-0.043

0.004

0.271***

valence_cmp

-0.171***

-0.234***

-0.115***

valence_c1

valence_c2

0.091**

cnormsV_cmp

0.011

0.021

-0.052

-0.037*

cnormsV_c1

-0.069*

-0.033

-0.033

-0.044*

cnormsV_c2

0.018

-0.044*

N

1256

985

1030

2376

adj. R-sq

0.186

0.223

0.335

0.270

AIC

-4168.924

-3699.738

-4045.808

-8399.62

AIC/N

-3.31921

-3.75608

-3.92797

-3.53519

*p < .05, **p < .01, ***p < .001

Table 23. Nested models for arousal: Standardized regression coefficients from models using the significant predictors from local models to predict English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable

ELP LD

BLP LD

BLP LD

ELP Naming

SUBTLEX frequency

-0.513***

-0.496***

-0.418***

BNC frequency

-0.483***

stimlen

0.187***

-0.023

0.040*

0.266***

arousal_cmp

arousal_c1

-0.010

0.023

-0.002

arousal_c2

0.034

cnormsA_cmp

0.046*

0.050*

-0.039*

cnormsA_c1

0.062**

0.030

0.064***

cnormsA_c2

0.015

0.058**

N

2100

1891

2268

2378

adj. R-sq

0.310

0.234

0.239

0.273

AIC

-6720.704

-6729.54

-7810.756

-8411.662

AIC/N

-3.20034

-3.55872

-3.44390

-3.53728

*p < .05, **p < .01, ***p < .001

Table 24. Nested models for concreteness: Standardized regression coefficients from models using the significant predictors from the local models to predict English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable

ELP LD

BLP LD

BLP LD

ELP Naming

SUBTLEX frequency

-0.503***

-0.484***

-0.403***

BNC frequency

-0.506***

stimlen

0.215***

0.008

0.072**

0.267***

concreteness_cmp

-0.040

-0.104**

-0.168***

concreteness_c1

-0.055*

-0.055*

concreteness_c2

0.037

0.080**

cnormsC_cmp

-0.034

-0.043

-0.010

cnormsC_c1

cnormsC_c2

0.042*

0.096**

0.050**

N

2084

1269

1462

2507

adj. R-sq

0.332

0.270

0.287

0.263

AIC

-6583.912

-4479.832

-5043.292

-8758.172

AIC/N

-3.15927

-3.53021

-3.44958

-3.49349

*p < .05, **p < .01, ***p < .001

Table 25. Meta-models: Standardized regression coefficients from models using all significant representation and context predictors from the nested models to predict English Lexicon Project (ELP) lexical decision (LD) times, British Lexicon Project (BLP) lexical decision times, and ELP naming times

Variable

ELP LD

BLP LD

BLP LD

ELP Naming

SUBTLEX frequency

-0.306***

-0.442***

-0.411***

BNC frequency

-0.565***

stimlen

0.191***

-0.038

0.021

0.270***

valence_cmp

-0.168***

-0.189***

-0.118***

valence_c1

valence_c2

0.062

cnormsV_cmp

-0.040*

cnormsV_c1

-0.041

-0.023

cnormsV_c2

-0.023

arousal_cmp

arousal_c1

arousal_c2

cnormsA_cmp

0.024

0.014

-0.072*

cnormsA_c1

0.052

0.046*

cnormsA_c2

0.058**

concreteness_cmp

-0.082*

-0.124**

concreteness_c1

-0.048

-0.077*

concreteness_c2

-0.008

cnormsC_cmp

cnormsC_c1

cnormsC_c2

0.063*

0.083*

0.058**

N

1205

623

666

2376

adj. R-sq

0.192

0.265

0.356

0.276

AIC

-3998.536

-2354.434

-2606.846

-8417.148

AIC/N

-3.31829

-3.77919

-3.91418

-3.54257

*p < .05, **p < .01, ***p < .001

Appendix B. Variables: Descriptions and sources

Table 26. Variables (Supplementary dataset)

Names | Descriptions | Source
id_master | ID | 1
stim | Compound | 1
c1 | First constituent | 1
c2 | Second constituent | 1
stimlen | Length of compound | 1
correctParse | Correct parse? (1=yes, 0=no) | 1
ratingcmp | Predictability rating | 1
ratingC1 | Meaning retention rating for first constituent | 1
ratingC2 | Meaning retention rating for second constituent | 1
stim_SLlg10wf | SUBTLEX-US log word frequency per million for compound | 1
BLPbncfrequency | BNC word frequency of the compound from the BLP | 1
BLPbncfrequencymillion | BNC word frequency per million of the compound from the BLP | 1
BLPbncfrequencymillion_LG | BNC log word frequency per million of the compound from the BLP | 1
BLPrt | BLP lexical decision times | 1
BLPrt_LG | BLP log lexical decision times | 1
elp_ld_rt | ELP lexical decision times | 1
elp_ld_rt_LG | ELP log lexical decision times | 1
elp_naming_mean_rt | ELP naming times | 1
elp_naming_mean_rt_LG | ELP log naming times | 1
valence_stim | Representation valence for compound | 2
valence_c1 | Representation valence for first constituent | 2
valence_c2 | Representation valence for second constituent | 2
arousal_stim | Representation arousal for compound | 2
arousal_c1 | Representation arousal for first constituent | 2
arousal_c2 | Representation arousal for second constituent | 2
concreteness_stim | Representation concreteness for compound | 3
concreteness_c1 | Representation concreteness for first constituent | 3
concreteness_c2 | Representation concreteness for second constituent | 3
cnormsV_stim | Context valence for compound | 4
cnormsV_c1 | Context valence for first constituent | 4
cnormsV_c2 | Context valence for second constituent | 4
cnormsA_stim | Context arousal for compound | 4
cnormsA_c1 | Context arousal for first constituent | 4
cnormsA_c2 | Context arousal for second constituent | 4
cnormsC_stim | Context concreteness for compound | 4
cnormsC_c1 | Context concreteness for first constituent | 4
cnormsC_c2 | Context concreteness for second constituent | 4

Source datasets:

1. Gagné, Spalding & Schmidtke [2019]
2. Kuperman [2020]
3. Brysbaert, Warriner & Kuperman [2014]
4. Snefjella & Kuperman [2016]


Notes

1 Warriner et al.’s [2013] dataset also included ‘dominance’ norms referring to the “degree of control” exerted by the stimulus word.

2 In the literature and in the present paper, the terms ‘norms’ and ‘ratings’ are used interchangeably to refer to the same values. In fact, the datasets referred to above contain norms obtained through averaging of native speakers’ ratings.

3 In Snefjella & Kuperman [2016], the term ‘content words’ is equivalent to the term ‘non-stopwords’. Stopwords correspond to the default English stopword list of the R tm-package (personal communication).

4 Also excluded were 493 words whose overall context values “were more than three standard deviations above or below the mean of the respective variable” [Snefjella & Kuperman 2016: 136].

5 The full list of norms can be found in the supplementary dataset of Snefjella & Kuperman [2016].

6 The text in this section was excerpted from Charitonidis [2014] with minor modifications.

7 Guevara and Scalise’s [2009] sample included Romance, Germanic, Slavic, and East Asian languages.

8 As Jackendoff [2010] reports, “there are some families of left-headed compounds in English, such as attorney general, mother-in-law, blowup, and pickpocket.”

9 Overviews of word recognition models can be found in Schreuder & Baayen [1995], Kuperman [2013], Norris [2013], and Snefjella & Kuperman [2016].

10 An independent (predictor or explanatory) variable is “the variable which an experimenter deliberately manipulates in order to observe its relationship with some other quantity, or which defines the distinct conditions in an experiment” (Collins English Dictionary, accessed 29 September 2022).

11 Log (= logarithmic) transformation is a method that replaces each value x with log(x). In LADEC, the base-10 log transformation was applied to both response times and word frequencies. Among other things, log transformation reduces the effect of very slow response times (outliers) in the sample. For details see Whelan [2010].
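A toy illustration of this point (not based on the paper's data):

```python
# Toy example: one extreme response time dominates the raw mean much more
# strongly than the log-transformed mean.
import numpy as np

rt = np.array([520, 560, 610, 650, 2400])   # response times in ms, one outlier
print(rt.mean())                            # 948.0 ms: pulled up by the outlier
print(np.log10(rt).mean())                  # about 2.89, i.e. roughly 774 ms when
                                            # back-transformed; far less affected
```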

12 A dependent (outcome or criterion) variable is “the variable measured by the experimenter. It is controlled by the value of the independent variable, of which it is an index” (Collins English Dictionary, accessed 29 September 2022).

13 The SUBTLEX-US corpus is a 51-million-token corpus based on American English subtitles.

14 A control variable is “a variable that is considered to have an effect on the response measure in a study but that itself is not of particular interest to the researcher. To remove its effects a control variable may be held at a constant level during the study or managed by statistical means” (APA Dictionary of Psychology, accessed 29 September 2022).

15 Linear regression is “a regression analysis in which the predictor or independent variables (xs) are assumed to be related to the criterion or dependent variable (y) in such a manner that increases in an x variable result in consistent increases in the y variable. In other words, the direction and rate of change of one variable is constant with respect to changes in the other variable” (APA Dictionary of Psychology, accessed 19 October 2022).
Multiple linear regression is a regression analysis in which a dependent variable linearly depends on two or more independent variables.
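In equation form (standard notation, added here for illustration, with b for the unstandardized coefficients as in note 21 and an error term ε): $y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_k x_k + \varepsilon$.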

16 According to the descriptions in Balota et al. [2007: 446], “In the lexical decision task, participants are presented with a string of letters (either a word or a nonword, e.g., FLIRP), and are asked to press one button if the string is a word and another button if the string is a nonword. In the speeded naming task, participants see a visual word (or sometimes a nonword), and are asked to name the word aloud as quickly and as accurately as possible.”

17 Steiger’s [1980] z test showed that this difference was significant, z = 27.71, p < .0001 [Gagné et al. 2019].

18 By referring to previous research, Gagné et al. [2019] report that “the modifier (the first constituent in English) tends to play a larger role in the ease-of-relation selection during the processing of compounds and noun phrases.”

19 Collinearity in regression analysis is “the situation in which two independent variables are so highly associated that one can be closely or perfectly predicted by the other” (APA Dictionary of Psychology, accessed 29 September 2022). For several independent variables the term ‘multicollinearity’ is used.
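A common diagnostic for (multi)collinearity is the variance inflation factor; the sketch below is illustrative only, using variable names from Appendix B and a hypothetical input file:

```python
# Illustrative sketch only: variance inflation factors (VIF) as a standard
# multicollinearity diagnostic. Variable names follow Appendix B; the input
# file name is a placeholder.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("compounds.csv")
X = sm.add_constant(df[["concreteness_c2", "cnormsC_c2", "stim_SLlg10wf"]].dropna())

for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
```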

20 Arguments against residualization in regression analyses can be found in York [2012] and Wurm and Fisicaro [2014].

21 The (unstandardized) regression coefficient (bi) “indicates the strength of relationship between a given predictor, i, and an outcome in the units of measurement of the predictor. It is the change in the outcome associated with a unit change in the predictor.” [Field 2009: 781]. The standardized regression coefficient (βi) “is the change in the outcome (in standard deviations) associated with a one standard deviation change in the predictor.” [Field 2009: 781].
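A quick numerical check of the relation between the two coefficients (toy data, not from the present study): the standardized coefficient equals b · sd(x) / sd(y), which in simple regression also equals the Pearson correlation.

```python
# Toy numerical check of beta = b * sd(x) / sd(y); not from the present study.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)

b = np.polyfit(x, y, 1)[0]                  # unstandardized slope
beta = b * x.std(ddof=1) / y.std(ddof=1)    # standardized slope
r = np.corrcoef(x, y)[0, 1]
print(round(beta, 3), round(r, 3))          # the two values coincide
```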

22 In Table 5, the three morphological levels are indicated with ‘cmp’ (compound), ‘c1’ (first constituent), and ‘c2’ (second constituent). For details on the control variables ‘SUBTLEX frequency’ and ‘BNC frequency’, see sections 1.2 and 4.1.

23 Model log likelihood is an integral part of the AIC measure (see the lnL part of the AIC equation in section 4).

24 The term semantic dimension is used by the author to refer to valence, arousal, and concreteness with joint reference to representation and context norms.

25 This version was also used by Snefjella & Kuperman [2016] and Gagné et al. [2019].

26 Gagné et al. [2019] use the abbreviations “stim” and “cmp” for “compound” interchangeably.

27 Listwise exclusion (or listwise deletion) removes every case that has a missing value on any of the variables entering the analysis.
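A toy illustration (not the paper's data):

```python
# Listwise deletion: any row with a missing value on the selected variables is removed.
import numpy as np
import pandas as pd

df = pd.DataFrame({"valence_c2":   [5.1, np.nan, 6.0],
                   "cnormsC_c2":   [3.2, 3.0, np.nan],
                   "elp_ld_rt_LG": [2.81, 2.79, 2.85]})

print(df.dropna())   # only the first row survives
```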

28 With reference to representation valence and concreteness, there were moderate positive correlations between both constituents and the compound (c1-cmp, c2-cmp). With reference to context norms, there were weak to moderate positive correlations between both constituents and the compound (c1-cmp, c2-cmp). Context valence and context arousal correlated moderately and negatively at the same morphological level (cmp, c1, c2) – the correlations between representation valence and representation arousal were weak and negative, again at the same morphological level.

29 The labels for the semantic transparency variables, i.e. “ratingcmp”, “ratingC1”, and “ratingC2”, were adopted from the LADEC dataset [Gagné et al. 2019].

30 Steiger’s [1980] two-tailed z test was conducted using the cocor package for the R programming language, as implemented at http://comparingcorrelations.org.

31 It should be noted that the second highest (weak to moderate) correlation was between representation concreteness for the compound and compound-based meaning predictability, i.e. r = 0.26, p < .001, see Table 11.


List of illustrations

Figure 1. LADEC entries: sample
http://journals.openedition.org/lexis/docannexe/image/6769/img-1.png

Table 10. Pearson correlations among representation norms of emotion (Brysbaert et al. [2014], Kuperman [2020]) and context norms of emotion (Snefjella & Kuperman [2016])
http://journals.openedition.org/lexis/docannexe/image/6769/img-2.png

Table 11. Pearson correlations among representation norms of emotion (Brysbaert et al. [2014], Kuperman [2020]) and semantic transparency norms (Gagné et al. [2019])
http://journals.openedition.org/lexis/docannexe/image/6769/img-3.png

Table 12. Pearson correlations among context norms of emotion (Snefjella & Kuperman [2016]) and semantic transparency norms (Gagné et al. [2019])
http://journals.openedition.org/lexis/docannexe/image/6769/img-4.png

References

Electronic reference

Chariton Charitonidis, “Context concreteness for the second constituent slows down compound-word processing”, Lexis [Online], 20 | 2022, Online since 28 December 2022, connection on 05 December 2023. URL: http://journals.openedition.org/lexis/6769; DOI: https://doi.org/10.4000/lexis.6769


About the author

Chariton Charitonidis

Independent researcher (ORCID iD: 0000-0003-0298-2629)
dr.chariton.charitonidis@gmail.com


Copyright

CC-BY-SA-4.0

The text only may be used under licence CC BY-SA 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
