In 2018, Clarivate Analytics, publisher of the Web of Science Journal Citation Reports (JCR), suppressed publication of the 2017 Journal Impact Factor (JIF) for three of the four journals in the academic field of history of economics. Clarivate judged one of the journals, History of Economic Ideas (HEI), to be the “donor” of citations that distorted the impact factors of the European Journal of the History of Economic Thought (EJHET) and the Journal of the History of Economic Thought (JHET). The other journal, History of Political Economy (HOPE), was not included in that judgment.
Built from citation counts, the JIF is commonly used in academic literature as an indicator of the influence of scholarly journals. Committees of appointment and promotion use it as a proxy for the importance of scholars’ published articles. The scholars themselves use it in choosing whose work to read, what to cite, and where to submit. Clarivate’s JIF suppression has thus elicited controversy and protest among historians of economics.
It has also raised questions of scholarly concern. What is the Journal Impact Factor? What is it intended to measure—and does it in fact do that work? What substitute, if any, could do the work? This article introduces a symposium designed to address these and related questions from the historian of economics’ viewpoint. The questions arise from Clarivate’s 2018 JIF suppression but venture beyond it, from the role of review articles to the use of citation indexes to both classify and evaluate written works. The present authors, together with the six others whose ideas and analysis are given in the short articles here assembled, have a range of perspectives and answers.
Our purpose in this introduction is to define the statistic at issue, summarize the controversy that gave rise to this symposium, and discuss some of the problems with the use of citation indexes and the JIF in historical context. We show how these problems pertain differently to the scholarly field of the history of economics than to economics in general, and in doing so frame this and the following five articles.
- 1 Academic journals publish a variety of items besides articles and review articles: book reviews, ed (...)
Clarivate’s JIF is the specific form of a simple statistic in longstanding use. It was introduced by Eugene Garfield (1925-2017) as a tool for evaluating which journals to include in his Science Citation Index (now Clarivate’s Web of Science). The statistic counts the number of citations in all of the literature canvassed during a given “citation year” to items published in a journal within the previous two years, and divides it by the number of “citable items” (i.e., articles plus review articles, but not book reviews or other content) published in that same journal within the same time window.1 In Clarivate’s terms, the JIF for a journal in year t is:
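$$\mathrm{JIF}_t \;=\; \frac{\text{citations in year } t \text{ to items the journal published in years } t-1 \text{ and } t-2}{\text{citable items the journal published in years } t-1 \text{ and } t-2}$$

(our notation for the verbal definition just given).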
- 2 The reader may be interested in Œconomia’s JIF. Although the journal’s documents have been indexed (...)
Table 1 presents JIFs for the four history of economics journals available in the JCR: JHET, HOPE, EJHET, and HEI (2011-2018).2 For purposes of comparison—so far as comparison is appropriate, a question to be taken up shortly—it does the same for the so-called top five general economics journals: American Economic Review (AER), Quarterly Journal of Economics (QJE), Journal of Political Economy (JPE), Review of Economics and Statistics (REStat), and Review of Economic Studies (REStud).
Table 1. 2011-2018 JIFs: Four Historical and Top Five General Economics Journals
Sources: Journal Citation Reports (Clarivate Analytics) and authors’ estimates from Web of Science data for the three shaded cells
The table shows considerable variation in JIFs over time for any given journal as well as variation among journals at any given time—especially between the historical journals and the top-five general ones, whose JIFs are higher by roughly an order of magnitude. Before delving into the causes of these variations, we draw attention to the shaded cells. They hold the explanation for Clarivate’s JIF suppression.
- 3 Citation stacking refers to groups of journals apparently working together to raise their JIFs.
- 4 For similar calculations, see Phil Davis’s blog post of 06/27/2018 in https://scholarlykitchen.sspn (...)
The JIFs for all history of economics journals jumped considerably in 2017. Contributing to the jump, although differently for each journal, was a single review article published in HEI: “From Antiquity to Modern Macro: An Overview of Contemporary Scholarship in the History of Economic Thought Journals, 2015-2016” (Lange et al., 2017). The subtitle indicates well the content. It also indicates the reason for suspicion of what Clarivate (2017, 2-3) calls “citation stacking,” which produces “distortion” of the JIF statistic.3 Why else, one may justifiably ask, was the overview limited to 2015-2016, precisely the window for Clarivate’s 2017 JIF? An article published in the previous volume of HEI (Bianchi, 2016) was likewise a survey of the history of economics literature over the preceding two years, 2014-2015, but it had neither the same scope nor the same effect. Its bibliographic entries numbered 69; the 2017 article’s entries, at 212, were more than three times as numerous. They were responsible for 42 of the 66 citations counted in 2017 to JHET articles published in the 2015-2016 window (64%), 48 of 92 citations to HOPE (52%), 44 of 86 citations to EJHET (51%), and 1 of 13 citations to HEI (8%).4
These, apparently, were the data before Clarivate when it suppressed the 2017 JIFs for JHET, EJHET, and HEI—the first two as recipients of JIF-distorting citations, the third as donor. The reason for suppressing only those three historical journals’ JIFs, not HOPE’s, is murky. Clarivate did not explain in detail but released a policy statement on suppression and an annual list of suppressed journals, with data implied to be relevant to the decision (Clarivate, 2017). The upshot is that, for an unspecified reason, Clarivate judged the increment to the 2017 citation count for HOPE to be less decidedly among that year’s “extreme outliers in citation behavior” (Clarivate, 2017, 3).
- 5 Nicola Giocoli (2019), HEI’s editor, relates the incident. For an analysis of JIF pressures in HEI’ (...)
Consistent with Clarivate’s policy, the company waited a year and then reevaluated JHET, EJHET, and HEI with the new year’s data (Clarivate, 2017, 3). With reevaluation came release from purgatory, although Clarivate continued to suppress the three journals’ 2017 JIFs. It published all four journals’ JIFs for 2018—then HEI voluntarily exited the Web of Science in 2019.5 Thus 2018 is the last year for comparing JIFs across our four history-of-economics journals (hereafter “H4”).
Table 1 shows that both the H4 and top five general economics journals (“G5”) conform to an upward JIF trend over the eight years in evidence. But variations about that trend, and the differences between the historical and general economics journals in their conformance to it, are telling of something more interesting for our purposes than the story of Clarivate’s suppression.
Close inspection of the data in Table 1 finds greater variation about the upward trend for the H4 than for the G5. One could argue that the variation for the H4 is exaggerated by the purported citation stacking of 2017. The argument is easily accommodated. It is straightforward to calculate the 2017 JIF for each journal excluding from the numerator the citations by the HEI review article. Doing so deflates JHET’s 2017 JIF from 1.347 to 0.490; HOPE’s from 1.415 to 0.677; EJHET’s from 1.147 to 0.560; and HEI’s from 0.289 to 0.267.
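To illustrate with JHET: the 66 citations counted in 2017 and the published JIF of 1.347 imply roughly 49 citable items in the 2015-2016 window (an inference of ours from the published figures, not a count reported by Clarivate). Removing the 42 citations contributed by the HEI review article then gives

$$\frac{66 - 42}{49} \;=\; \frac{24}{49} \;\approx\; 0.490.$$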
The data including these four substitutions are represented in Figure 1. The lower average JIF level of the H4 relative to the G5 is controlled for by normalizing each journal’s annual JIF by its 2011 value (2011 = 100). Figures 1a and 1b show the time series of normalized JIFs for both sets of journals.
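In symbols, with j indexing journals and t years (our notation):

$$\mathrm{JIF}^{\mathrm{norm}}_{j,t} \;=\; 100 \times \frac{\mathrm{JIF}_{j,t}}{\mathrm{JIF}_{j,2011}}$$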
Figure 1. Normalized JIFs for Four Historical and Top Five General Economics Journals
Figure 1 presents even more strikingly than Table 1 the greater variation of JIFs over time for the H4 compared to the G5. This difference in variation is due partly to the different sizes of the scholarly communities engaged with the two sets of journals. It is due decisively to the communities’ conversational forms, which entail different citation practices that fit better or worse (or hardly at all) within the 2-year JIF window.
- 6 Lange et al. (2017)’s review article is an anomaly. Its reference list, 212 items long, fits entire (...)
Table 2 summarizes information about the cited references in all articles and review articles for our two sets of journals (2011-2018). The H4 have about 1.5 times as many cited references per article as the G5—but only a small percentage of them, from 2.6% to 4.1% (excluding the 2017 data), are to works produced in the previous two years, compared to percentages 3 to 4 times greater for the G5.6 That is partly because historians cite a lot of old primary literature as well as old secondary works. It is also because they believe the secondary literature that they do cite remains relevant longer than economists believe theirs does.
In short, the two-year JIF window corresponds little to what historians of economics do. Whereas economists publishing in G5 journals may care about placing their research among the competing recent literature, historians of economics care less. In a relatively small field with little presumption that new scholarship supersedes old, even a 15-year window captures a smaller share of historians’ references than the 5-year window captures for the G5 journals.
Table 2. Cited References (2011-2018): Historical Four (H4) vs. Top Five General (G5) Economics Journals
Source: own calculations using Web of Science data (November 5, 2019)
It follows that history-of-economics JIFs are bound to be at once small and highly sensitive to work such as HEI’s surveys (more in Pinzón-Fuchs et al., this issue). What is more, they are bound to be highly variable. What meaning, then, may be attributed to them? If history-of-economics JIFs do not in themselves reflect historians’ citation practices—and if they are so variable as to have no stable relation to other statistics that may reflect historians’ citation practices, and for which the JIFs could therefore be used as proxies—then the answer is evidently: not much. To put the question concretely: when we find JHET’s JIF declining by an order of magnitude from 0.420 in 2011 to 0.047 in 2012, then rising again to 0.326 in 2013—in all of which years historians of economics maintained their habit of citing works almost entirely (> 95%) outside the JIF’s two-year window—what inference may be drawn about the journal’s “impact” in any of those years, or ever?
Clarivate’s JCR produces a variety of indicators besides the JIF, including “total cites,” “5 year impact factor,” “immediacy index,” “eigenfactors,” “article influence scores,” and more. Even so, it is the JIF that is emphasized, fixing attention on what is observable through a 2-year window. There the attention stays for all journals and all fields. In this light, Clarivate’s Web of Science disclaimers about the care that is needed when comparing or otherwise interpreting JIFs across disciplines look like poor window dressing.7
Controversy over the JIF precedes this symposium, of course. A lengthy “Special Discussion Issue on Journal Impact Factor” published in the journal Scientometrics (2012, Vol. 92, No. 2) includes analyses of multiple purported abuses and misunderstandings of the metric. The findings for scholarship in general anticipate those of this symposium for the history of economics. To wit: (i) because citation frequencies depend on many variables besides scientific merit (including even coding mistakes in the citation database), JIFs are dubious indicators of quality; (ii) they were created originally to assist librarians managing journal collections; (iii) they are incomparable across disciplines, as citation practices differ among fields and fit differently within the 2-year windows; (iv) for purposes of evaluating individual scholars, they are poor substitutes for reading the scholar’s work itself.
So far as historians of economics are concerned, there is more to consider about the JIF than definitions and trends; there are also questions of interpretation and usage. The history of citation indexing more generally is relevant to our inquiry.
Although historical surveys track the history of citation indexes back to legal writings from the 18th century or indexes of religious literature from the 12th (Smith, 2012), the immediate origin of the Science Citation Index (SCI) is found in the mid-1950s. It was then that Shepard’s Citations, a system of printed volumes for legal research, was presented to scientists as a method to help them “thread [their] way through the existing labyrinthine mass of printed materials” (Adair, 1955, 31).
When Eugene Garfield introduced citation indexing to chemistry and genetics, he claimed that it would prove particularly useful for historical research “when one is trying to evaluate the significance of a particular work and its impact on the literature and thinking of the period” (Garfield, 1955, 109). The concept of “impact” was thus present at that earliest stage in the history of the SCI (now Clarivate’s Web of Science).
Using citation data for historical research was one of Garfield’s first applications of the SCI (Garfield, 1963, 289). He designed computerized “topological network diagrams” showing chronological relationships between documents (not journals). Such “algorithmic historiography” was supposed to facilitate “the understanding of paradigms by enabling the scholar to identify the significant works on a given topic” (Garfield et al., 2003, 400). Drawing on Thomas Kuhn, Garfield represented paradigms by way of the “measurable impact” of their main elements—meaning, at this early time, not aggregate journal factors but the citation counts that would later constitute them:
We want to show where a particular topic began and identify both the bibliographic antecedents and descendants of its principal, often primordial papers and authors. Once these basic structural elements (papers and books) of the field are identified, they are ‘summarized’ graphically as an interconnected historiograph involving, typically, the 5% that are the most-cited. (Garfield et al., 2003, 400-401, our emphasis)
- 8 For an analysis of the bibliometric approach to the history and sociology of science, including Eug (...)
Figure 2 reproduces Garfield et al.’s (2003) “historiograph” for the paradigmatic shift in Garfield’s own scholarly domain: from citation indexing to bibliographic coupling to co-citation analysis (ibid., 405). Starting with M. Kessler (1963) and Garfield (1963), this historiograph also includes a complementary search for “outer references”: documents that do not cite Kessler (1963) or Garfield (1963), but that are frequently cited together with them.8
Figure 2. Historiograph. Dotted lines indicate “outer references”
Source: Garfield et al. (2003).
It was in the process of expanding the SCI and then creating the Social Sciences Citation Index (SSCI, 1973) that Garfield (1972) produced the aggregate concept of “relative impact factor” for academic journals. This citation metric, included in the JCR beginning in 1975, controlled for size effects among scientific journals. For a reason that is interesting to check against current practices in particular fields, not least the history of economics, it also introduced the 2-year window:
We have attempted to do this by calculating a relative impact factor—that is, by dividing the number of times a journal has been cited by the number of articles it has published during some specific period of time … . An analysis of the distribution has shown that the typical cited article is most heavily cited during the 2 years after its year of publication. (Garfield, 1972, 476)
Specifically, Garfield found that between a fifth and a quarter of all references in the SCI-indexed literature were to articles 3 or fewer years old. The reader may observe in Table 2 how far the history of economics literature departs from that finding.
Unlike the historiograph, the JIF burgeoned in use following Garfield’s (1972) first description of his “relative impact factor.” Of course the evaluative use of citation indexes and the JIF expanded too, far beyond Garfield’s original intention of aiding librarians in their management of journal collections. That evaluative function has spread into fields including the history of economics. With its spread, it has affected the citation indexes from which the statistic is constructed, and the scholarly practices it is supposed to measure.
The contributions to this symposium build upon the foregoing discussion arising from Clarivate’s 2018 JIF suppressions.
In line with this introductory piece, the following article by Erich Pinzón-Fuchs, Cléo Chassonnery-Zaïgouche, and José Edwards uses the occasion to examine the role of review articles in the history of economics. Such articles have been more common in other academic fields, where they have served multiple purposes. Recent examples in the history of economics have appeared in survey series conceived by HOPE and HEI—series that also differ from one another in purpose. The authors discuss these series in light of the history of economics discipline and the practice of writing review articles.
In “Down with High Citation Counts,” James Forder casts doubt on the view that highly-cited articles can reliably be presumed “worthy.” The greater the number of citations, the more conspicuous the cited article in citation indexes like the Web of Science, Google Scholar, or Scopus, and the higher its “impact” by conventional measures. But what does conspicuousness imply? Forder presents the case of one very highly-cited article: Milton Friedman’s “The Role of Monetary Policy” (1968). He compares it to another work by the same author, written at about the same time and making very much the same argument, but with “a much higher quality presentation.” The case spotlights one of several reasons for scholarly attention that are conflated within citation indexes: citing an article as “tribal ritual.”
Melissa Vergara Fernández’s article observes that the meaning of “impact” offered by Clarivate in marketing the JIF is no more than the operations used to calculate it. Using a philosophical theory of measurement, she discusses the significance of a metric of impact with such a purported meaning—and its limits. Her contribution suggests that several of the uses made of JIFs (and other citation-based metrics commonly used to evaluate history of economics literature) are not warranted, especially their use as quality measures.
The last two articles in this set discuss JIFs as the history of economics scholarly community perceives them. In “Understanding the Effects of Journal Impact Factors on the Publishing Behavior of Historians of Economics,” Jimena Hurtado and Erich Pinzón-Fuchs examine three channels through which practitioners in our field might perceive these effects: impact on careers, impact on publication practices, and the use of JIFs as sources of relevant information about research in the field. Their analysis of data collected from a survey of practitioners helps us to understand how historians of economics see and react to the prevailing “quantitative evaluation mania.”
Finally, José Luís Cardoso explains “The Reduced Impact of Impact Factors on the History of Economics Community.” Historians of economics, he claims, are not so preoccupied as we may think with measuring the value of their contributions by means of impact factors, nor eager to sacrifice the quality assessment of their writings to any imposed metric rule. One reason is precisely the modest performance of the history of economics as registered through current citation indexes. Another, however, is the publication culture of our scholarly community, which invests considerable prestige and effort in the authorship of books and chapters in edited volumes. The development of new assessment instruments that take books into account, together with the use of online information management systems, is already encouraging new forms of communication—partly in consequence of the problems with factoring “impact” examined in this symposium.
- 9 For early prominent responses to the Clarivate case, for different purposes from ours, see the JHET (...)
What to make of these contributions on the whole? The reader may recall the critical responses to Clarivate’s JIF suppression from prominent members of our scholarly community in the early days of the case;9 the reader may also get an impression of a generally critical perspective on Clarivate in this symposium. The impression would not be mistaken, as far as it goes. But we hope to leave readers with an impression that is deeper and more varied, and to prompt a response that is more self-reflective. Criticism of Clarivate is not the main purpose of any contribution to this symposium. So far as it is one purpose, the authors form their critical points on different bases, whether philosophical, statistical, historical, or professional. Nor do their criticisms imply common answers to pertinent questions in the case. Was publication of the HEI article that troubled Clarivate an instance of citation stacking? Does the survey series to which that article contributed have merit? Is an impact factor in some form, if not the 2-year JIF, a useful measure? Is it useful for the purpose for which it is likely to be used?
The reader expecting the unanimity of the early responses from history-of-economics journal editors and society presidents will not find it here. Our main purpose is different. It will be accomplished if the historical and methodological reflections offered in this symposium inform not only our scholarly community’s response in future cases but also—and what is more—our own research, citation, and editorial practices.
We thank Wade Hands, Jeff Biddle, Loïc Charles, and other participants at sessions of the History of Economics Society 2019 conference in New York City and the Allied Social Science Associations 2020 conference in San Diego for helpful comments. We thank Pedro Garcia Duarte, Tiago Mata, and anonymous referees for editorial and other suggestions that sharpened our work. The editorial team at Œconomia has been accommodating and constructive and we are grateful for their efforts.