Factoring “Impact” in the History of Economics

The Journal Impact Factor Might Be Useful But, for What, Precisely?

Le facteur d’impact des revues peut être utile, mais à quoi précisément ?
Melissa Vergara-Fernández
p. 473-484

Abstract

In this paper I evaluate the Journal Impact Factor using a theory of measurement. To measure a concept adequately, our theory of measurement requires an adequate measurement strategy: correspondence between three steps—the characterisation of the concept, its representation, and the procedures followed to carry out the measurement—and fitness for a purpose. On this basis, I suggest that the uses given to the JIF as a measurement tool are unwarranted. The JIF does not have the machinery to adequately measure what it is usually taken to measure. The bottom line is not that the JIF ought to be eschewed. The bottom line is rather that an adequate measurement strategy is needed.

1. Not Any Number Counts

Quantification is in full swing. From the mundane, such as quantifying fitness with wristwatches, to policy-making, such as financing public development programmes conditional on measurable impact, quantification is part and parcel of our current way of life. Scholarship, including the history of economics, is no exception. While bibliometrics was used in the sixties and seventies as a tool by sociologists of science to study the dynamics of scientific practice, today it is prominent in policy decisions in academia (Gingras, 2014). Measures of the impact of scholarship determine publication outlets, funding decisions, and careers.

There is much at stake, then, in decisions like the one taken by Clarivate not to publish the Journal Impact Factor (JIF), the most prominent measure of journal impact, of three of the four history of economics journals for the year 2017. Clarivate claimed there had been “citation stacking”: one journal, the History of Economic Ideas (HEI), “donating” citations to the other two, the Journal of the History of Economic Thought (JHET) and The European Journal of the History of Economic Thought (EJHET).1 Decisions such as Clarivate’s tarnish the reputation of journals and affect funding decisions and academic careers. Scrutiny of the JIF and other measures of scholarly impact is in order.2

To be sure, pages upon pages have been written about the JIF. Scholars from various disciplines have criticised the JIF as a poor measure. Some have argued that average citations may grossly misrepresent the actual citation record of the majority of articles published by a journal (e.g. Editorial, 2005; Leydesdorff et al., 2016). Others have pointed out that different fields have different citation practices that do not conform to the one assumed by the JIF—i.e. the two-year window (see Edwards and Meardon, this volume). Still others have levelled criticism against the JIF for generating perverse incentives for the academic community, such as journal self-citations and citation cartels (e.g. Brembs et al., 2013; Moustafa, 2015; Perez et al., 2019).

In this paper, I shall suggest that the uses given to the JIF as a measurement tool are not warranted. The JIF does not have the machinery to adequately measure what it is usually taken to measure. My purpose, however, is not merely to provide more grist to the critics’ mill. While the large bibliometric literature has mostly focussed on the marrow of the statistical and mathematical methods to measure scholarly impact, less attention has been given to the philosophical question of what adequate measurement is. I will address this question here.

The relevance of this question ought not to be underestimated. As I will argue in the remainder of the paper, adequate measurement hinges on a set of requirements matching the purposes for which one measures. This implies that, before we can have a fruitful discussion about whether the JIF is appropriate for fields such as the history of economics, clarity and explicitness are needed about what adequate measurement of scholarly impact is. Not any number counts.

2. A Theory of Measurement

Although philosophers have found it difficult to give an unequivocal definition of measurement, many agree that it is an activity involving interaction with a concrete system with the aim of representing some of its features in abstract terms, such as classes, numbers, or vectors (Tal, 2015). The theory of measurement introduced by Cartwright et al. (2017) and Cartwright and Runhardt (2014) is helpful for understanding the problems with using the JIF to measure impact. According to this theory, three requirements have to be fulfilled in order to measure adequately, meaning that the abstract terms correspond systematically with the concrete system of interest. First, the concept of interest has to be characterised. Second, the way in which the concept is represented has to be defined. Third, a set of procedures has to be established to make sure that the tokens—the elements we are interested in measuring—picked out by the concept really are the ones intended to be picked out. I will discuss these requirements in turn.

When we measure, we are interested in concepts that pick out qualitative or quantitative properties that individuals or populations have. Some concepts pick out properties that are easily observable, like age or income level. Other concepts, like civil war or journal impact, are more elusive: they have fuzzy boundaries.3 Nature does not help us to neatly distinguish them; it is our interest in them that does. But this interest often is not neat. There is nothing in nature or anywhere else that tells us when an alcoholic has become one, when climate change set in, or when a journal has had impact. We select the criteria by which we decide whether a token is picked out by our concept. These criteria are varied and complex.

Therefore, the first requirement for adequate measurement is that we set the boundaries of our concept as clearly as possible. Take the example discussed by Cartwright and Runhardt (2014): “civil war”. Social scientists have commonly taken four criteria to define it: internal fighting, active government involvement, an appreciable amount of force applied by the involved parties, and a certain number of deaths resulting from the fighting. Tokens—countries, in this case—that fulfil the four criteria are picked out by our concept “civil war”. If we were to add that, say, “involved parties are mostly women”, then “civil war” would become another concept. In turn, we would assign other countries—probably none—to our newly created concept.4

The second requirement is that the formal representation of the concept reflects and is warranted by our characterisation of the concept. Take “civil war” again. Provided that we have defined it as the presence of the four aforementioned criteria, we can represent it as a binary variable. We can only tell whether Colombia or the Netherlands is experiencing civil war. To track the severity of Colombia’s civil war throughout the years, a different representation, namely a continuous variable, would be needed.

Several kinds of representation are used in science. The most common are the nominal, which assigns different numbers (or letters) to the tokens; the ordinal, which ranks tokens; the interval, which orders tokens on a scale with equal intervals; and the ratio, which orders tokens on a scale with equal ratios and a true zero point.

For concepts that have fuzzy boundaries, there are three common strategies for representation (Cartwright and Runhardt, 2014). The best is to construct tables of indicators. Since none of the criteria by which we want to define a concept can be singled out as essential, we must consider them all. As with members of a family, tokens may share several traits, but not all have skin problems or crooked noses. Civil war would be less precise if we were to characterise it only by “presence of internal fighting”, as per above. The Netherlands would then have a civil war, too, by dint of the Dutch gangs shooting at each other in competition for the illicit drugs market. The downside of tables of indicators is that comparisons across tokens or time cannot be carried out.

An alternative strategy is to pare down the concept until it is so parsimonious that we can agree on a single criterion to define it. Something like this has been done with the concept of “race” in different medical contexts (Efstathiou, 2012). Epidemiologists care about race in terms of regional heritage that may be associated with risks of diseases, whereas geneticists care about it in terms of genetic polymorphisms.

The third strategy is representation as an index—a compromise between the other two. Indices keep the variety of criteria embraced by tabular representation but weigh them according to some rule. This facilitates comparison. An example is the Human Development Index, which takes per capita income, life expectancy, and education as salient criteria of human development and gives each a weight.
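
To see the general mechanics of an index (several criteria kept, normalised to a common scale, and combined by a weighting rule), consider the following minimal sketch in Python. The criteria, normalisation bounds, and equal weights are invented for illustration and do not reproduce the actual formula of the Human Development Index.

```python
# Illustrative sketch of representation as an index: normalise several criteria
# and combine them with weights. All numbers are invented; this is not the
# actual HDI formula.

def normalise(value: float, minimum: float, maximum: float) -> float:
    """Map a raw criterion value onto the [0, 1] interval."""
    return (value - minimum) / (maximum - minimum)

def weighted_index(scores: dict, weights: dict) -> float:
    """Combine normalised criterion scores into a single number."""
    return sum(weights[name] * score for name, score in scores.items())

scores = {
    "income": normalise(25_000, minimum=100, maximum=75_000),   # per capita income
    "life_expectancy": normalise(74, minimum=20, maximum=85),   # years
    "education": normalise(12, minimum=0, maximum=18),          # mean years of schooling
}
weights = {"income": 1 / 3, "life_expectancy": 1 / 3, "education": 1 / 3}

print(round(weighted_index(scores, weights), 2))  # one comparable number, here 0.61
```

A table of indicators would instead report the three scores side by side; the weighting rule is precisely what buys comparability, at the price of collapsing the criteria into a single number.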

The final requirement is that the procedures followed to carry out the measurement correspond to the characterisation and representation of the concept. The procedures are the methods used to find out which tokens belong in the categories. For measuring civil war, the procedures involve, among other things, determining how casualties are counted.

Arriving at correct procedures often involves reconsidering characterisation and representation. We might characterise a concept in one way but then discover that, in practice, the procedures necessary to do justice to the concept are cumbersome or too expensive to carry out. This often requires moving back and forth among the three requirements, reconsidering and adjusting. In this way we make sure that the empirical system and the abstract terms by which we represent it correspond.

3. An Assessment of the JIF

I have said that characterising a concept involves determining its boundaries by identifying criteria to pick out tokens. In the case of “journal impact” this would amount to selecting the criteria to determine what counts as impact. As for the JIF in particular, Clarivate defines it by means of the operations required to calculate it: “The average number of times articles from a journal published in the past two years have been cited in the Journal Citations Report (JCR) year” (Clarivate, 2020). Specifically, it is calculated as follows:5

JIF_t = (number of times articles published by journal x in t-1 and t-2 were cited by indexed journals in t) / (total number of articles published by journal x in t-1 and t-2).
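
To make the operational definition concrete, here is a minimal sketch in Python of the ratio just stated. The journal, the years, and all the counts are hypothetical, and the sketch ignores Clarivate's further distinction between citable and other items.

```python
# Minimal sketch of the JIF ratio defined above. All counts are hypothetical,
# and the distinction between citable and other items is ignored.

def journal_impact_factor(citations_in_t: int, items_in_t1: int, items_in_t2: int) -> float:
    """Citations received in year t to articles published in t-1 and t-2,
    divided by the number of articles published in t-1 and t-2."""
    return citations_in_t / (items_in_t1 + items_in_t2)

# Hypothetical journal: 35 articles published in t-1, 40 in t-2, and 90 citations
# received in year t from indexed journals to those two cohorts.
print(journal_impact_factor(citations_in_t=90, items_in_t1=35, items_in_t2=40))  # -> 1.2
```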

Another way to state this definition is that there are no criteria to define “journal impact” besides what the operation establishes. This is how Clarivate defines the JIF on its website (Garfield, n.d.) and in other documents such as King (2017).

Justification for this definitional strategy can sometimes be found in the philosophical position known as operationalism. Percy Bridgman, its proponent, stated that “a concept is synonymous with the corresponding set of operations” (Bridgman, 1927, quoted in Tal, 2015), arguing that nothing else is needed for definition. Such a strategy is the most extreme version of a pragmatic perspective towards measurement: there are no facts of the matter as to which operations or procedures truly measure a specific quantity (Tal, 2015). An implication of this perspective is that a concept cannot have more than one set of operations; strictly speaking, another set of operations would define a different concept.

If a concept is defined by the corresponding set of operations, the three requirements of our theory are tightly aligned. The concept has been characterised—to wit, by criteria that are procedures. And the representation of the concept has been decided in terms of those same procedures. The JIF thus appears to fulfil the requirements of our theory.

There are two problems, however. First, purposes matter. We measure not just because we are interested in an indiscriminate accumulation of facts; we measure for some greater purpose. Otherwise, we would count our hairs and weigh our clipped nails. The measurement strategy, the fulfilment of the three requirements, has to be fit for that purpose. The implication is that, before we even start thinking about how to characterise, represent, and define procedures to measure a concept, we first have to determine for what purpose we measure. In the case of the JIF, it is not clear what that purpose is. The only hint we have is that in an earlier incarnation of the JIF, although the procedures were very similar, the purpose for which the metric was devised was markedly different. In the absence of an explicit and clear purpose, we have at least prima facie reason to conclude that the JIF’s measurement machinery serves another purpose altogether.

This is the story. Eugene Garfield, the creator of the Science Citation Index (SCI), first used a “Journal Impact Factor” to select the journals that were to be included in the SCI (Garfield, 2006). The SCI was meant to allow users of scientific literature to track how ideas travelled: by whom and in which field a particular paper had been cited. This way, a reader would be able to tell whether work cited as authority was valid. To this end, it was suggested that Garfield employ the ‘citator’ system used in law since 1873, published as Shepard’s Citations (Garfield, 1963). This system allows lawyers looking for authoritative precedents to consult all the subsequent cases that have cited a particular case of interest (Adair, 1955). In addition to tracking how ideas travel, an analogous system for science would also be more indicative of the significance of a particular work in the literature than the absolute number of publications of a scientist (Garfield, 1955, 109).6

A crucial difference between a citation index such as Shepard’s Citations and one for science was—and continues to be—the volume: when Garfield first wrote about the SCI in 1955, the volume of publications in science was fifty to a hundred times greater than in law (Garfield, 1955). Therefore, the journals to be included in the SCI had to be selected. This was the purpose the “Journal Impact Factor” served back then.

Already in 1927, Gross and Gross (1927), without calling it “journal impact”, had used citation counts as a selection method. In their case, it was about the journals a library should purchase. They were addressing a challenge that, at the time, was arising for small colleges in the USA. Colleges aimed to prepare students for specialised graduate programmes while simultaneously imparting cultural education. Colleges also hired faculty with PhDs, who required access to journals for their research. The problem Gross and Gross were trying to solve was thus: what journals, satisfying the needs of students and faculty, should college libraries acquire, given their financial restrictions? The method they proposed consisted in taking an influential journal as a base, noting all the journals cited therein, and arranging them in order of the frequency with which they were cited. They also tabulated these results in subperiods of time to highlight how citation patterns can change over time. This, too, was to be considered when making purchasing decisions.

Their purpose was clear: to offer “an arbitrary standard of some kind by which to measure the desirability of purchasing a particular journal” (Gross and Gross, 1927, 386). Thereby they defined ‘desirability of purchase’ by the number of citations and represented it on an ordinal scale. As such, it was not meant to convey any inherent information about journals. The same holds for Garfield in his use of the “Journal Impact Factor” as a selection tool for the SCI. By contrast, as the JIF is currently used, it is taken to convey information not only about the journal but often, too, about the authors who publish in it. Without a clear, explicit purpose there is little reason to presume that the JIF can say something inherent about journals.

Indeed, the second problem of the JIF is that operational definitions, instead of being motivated by pragmatism, are often used when understanding of the concept and knowledge of alternative features that might capture the concept are deficient (Cartwright et al., 2017). In addition, operationalisation makes knowledge accumulation difficult. This may happen because the operations stand in for the explicit criteria by which tokens are picked out, rendering the operations unjustified. This hampers generalisability.

This seems to be the case with the JIF. Clarivate’s operational definition does not provide clues about what “impact” is beyond mere average citations. There are no substantive criteria that justify the operations that define it. For instance, why a two-year and not a three-year window? As Edwards and Meardon (this volume) point out, Garfield (1972) justifies the two-year window only as the result of an analysis of the distribution of a selection of journal citations: a significant share of the cited articles had been published in the two previous years. But this is not a substantive criterion. It is an empirical finding about the pattern of citations in journals canvassed by the Science Citation Index over half a century ago, before the Social Sciences Citation Index existed. The finding may or may not continue to hold at present, or in the future, for all subsets of the population in question, let alone for other populations. The population in question for this symposium’s purposes is history of economics publications. There the finding decidedly does not hold; historians of economics cite over much longer spans of time.
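
To see why the choice of window is not innocuous, the following sketch compares a JIF-like ratio under a two-year and a five-year window for two hypothetical journals that receive the same total number of citations, but with different citation-age profiles: a "fast" field that cites mostly recent work and a "slow" field, such as the history of economics, that cites over much longer spans. All the numbers are invented.

```python
# Sketch of the window's effect on a JIF-like ratio. Two hypothetical journals
# receive the same 150 citations in the JCR year, but the cited articles differ
# in age. All numbers are invented.

def windowed_rate(citations_by_age: dict, articles_per_year: int, window: int) -> float:
    """Citations to articles at most `window` years old, per article published
    in those years."""
    cited = sum(count for age, count in citations_by_age.items() if 1 <= age <= window)
    return cited / (articles_per_year * window)

fast_field = {1: 60, 2: 50, 3: 20, 4: 10, 5: 5, 10: 5}    # age of cited article -> citations
slow_field = {1: 10, 2: 15, 3: 20, 4: 25, 5: 30, 10: 50}  # same total, older citations

for window in (2, 5):
    fast = windowed_rate(fast_field, articles_per_year=40, window=window)
    slow = windowed_rate(slow_field, articles_per_year=40, window=window)
    print(f"window={window}: fast={fast:.2f}, slow={slow:.2f}")

# With the two-year window the "fast" journal looks more than four times as
# "impactful"; widening the window narrows the gap considerably.
```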

Correlations of the JIF with other metrics are often presented as evidence that these metrics are able to track impact, whatever that might be. Mingers and Yang (2017), for instance, have found that for journals in business and management the JIF and other metrics such as the h-index, eigenfactor score, and SNIP are highly correlated.7 Yet they also point out that some journals’ rankings can change by over a hundred places depending on which metric is used. The same holds for history of economics journals.8 Consider the group of three including the Journal of the History of Economic Thought (JHET), History of Political Economy (HOPE), and The European Journal of the History of Economic Thought (EJHET). Despite similarly high correlation, between 2014 and 2015 HOPE’s JIF grew by half while JHET’s more than quadrupled, making JHET the top-ranked journal among the three. But over the same year JHET’s eigenfactor grew only by a third, leaving it as the third-ranked journal behind both HOPE and EJHET; and although JHET’s SNIP, too, grew only by a third, HOPE’s did not budge, so by that metric JHET became the top-ranked journal anyway. Only by knowing what determines impact can we make any sense of this kind of anomaly. Insofar as this knowledge is lacking, all we can say about how to increase the impact of scientific research is to publish in high-impact journals. This is, of course, absurd.
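
The statistical point, that two metrics can be strongly correlated across a set of journals while still reshuffling the ranks of particular journals, can be seen in a small sketch. The journal labels and scores below are invented and do not reproduce the figures cited above.

```python
# Invented scores for six journals on two metrics: the metrics correlate strongly
# overall, yet they disagree about the ordering of particular journals.

from statistics import correlation  # Pearson's r, available since Python 3.10

metric_a = {"J1": 0.4, "J2": 0.9, "J3": 1.6, "J4": 2.4, "J5": 3.1, "J6": 4.0}
metric_b = {"J1": 0.5, "J2": 1.4, "J3": 1.1, "J4": 2.1, "J5": 3.4, "J6": 3.8}

journals = list(metric_a)
r = correlation([metric_a[j] for j in journals], [metric_b[j] for j in journals])
print(round(r, 2))  # close to 1

def ranking(metric: dict) -> list:
    """Journals ordered from highest to lowest score."""
    return sorted(metric, key=metric.get, reverse=True)

print(ranking(metric_a))  # J3 outranks J2 here...
print(ranking(metric_b))  # ...but not here
```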

Many take the JIF to be associated with quality—e.g. Garfield (2006); Leydesdorff et al. (2016); Mingers and Yang (2017).9 Intuitively this makes sense: if science is humanity’s greatest epistemic achievement, a metric that has become the norm must somehow track whatever it is that makes science valuable. Indeed, a similar argument has previously been used to defend the use of the JIF (Garfield, 2006; Hoeffel, 1998). And, although officially Clarivate shies away from associating the JIF with quality, in one of its informational videos the voiceover says: “Journal Impact Factor scores help you compare journals to assess the relative quality of different publications” (Web of Science Training, 2017, emphasis added).

The trouble, however, is that the JIF’s operational definition does not warrant its association with quality. There are two challenges. First, the JIF gives precedence to the short-term publication record: what Leydesdorff et al. (2016) call the research front. They make a distinction between this short-term research front and long-term processes. They suggest that the research front tends to involve transitory knowledge-claims whereby researchers inform one another about progress. These knowledge-claims reflect involvement in current discourse. By contrast, in the long term, knowledge-claims become codified into large bodies of knowledge. The suggestion is not that quality is only to be found in the established bodies of knowledge. Rather, it is that we have no a priori reason to presume that research quality is associated only with short-term high average citation frequency, as the JIF presumes. If anything, long-term processes are, prima facie, better indicators of quality: they have passed the test of time.10 (If one were to think of scholarly communities that put weight on long-term processes in determining what they consider “impact”, one might well come up with the history of economics.)

Second, Brembs et al. (2013) provide evidence that the JIF can be negatively correlated with quality in some fields. They assess quality in terms of reliability. Two well-known phenomena in publication patterns lead to the negative correlation. One is publication bias: novel and surprising results are more likely to be published than replications of known results. The other is the decline effect: published effect sizes tend to decline with time, so that the first time a causal relation is established, the published effects tend to be larger than in subsequent attempts to replicate it. These two phenomena, together with the fact that initial findings tend to appear in high-impact journals, suggest that the effects published in some high-impact journals may be overestimated. If this is so, they are less reliable. The implication is that the JIF does not systematically assign high values to high-quality journals and low values to low-quality journals across all disciplines. But, as I argued above, this systematic correspondence between our system of interest and our abstract system is precisely what we want to achieve when we measure.

Let us now take stock. The JIF fulfils the requirements of our theory. Fine. But our theory also tells us that the alignment of the three requirements has to be fit for a purpose. Unlike in the first incarnations of the JIF, the current purpose is, at best, unclear. The JIF is so thinly characterised that it fails to warrant any meaningful interpretation of impact. And as a measure of quality, at least in terms of reliability, it is a poor one.

4. What is to Be Done?

“Journal impact” is a fuzzy concept. The reason for this, again, is that some concepts are best characterised by a set of criteria, none of which is essential. There are many criteria we have reason to regard as related to impact in academia: breakthrough ideas, generation of the greatest benefit for society, or influence on public policy. Many more are possible. Gingras (2014) has discussed and motivated some, too.

As a fuzzy concept, “journal impact” would be best characterised and represented as a table of indicators that includes the criteria that we in academia—or, in fact, in each academic subfield—care about as related to the impact of our scholarship. Should that strategy fail, perhaps because we do want to be able to make comparisons—for funding decisions, say—we may opt for a single pared-down criterion. Or an index, if we want to compromise. But any of these choices needs to come from a prior, conscious, and explicit intent of measuring for a purpose. Then we have to make sure that our three requirements are fulfilled, given this purpose.

To be sure, I am not suggesting that we can only start measuring once we have perfectly discerned how best to characterise and represent our concepts and established the procedures accordingly. This is clearly not consistent with the history of science. It took nearly two centuries to settle on the freezing and boiling points of water as fixed points in thermometry—in 1701 Isaac Newton proposed the melting point of snow and blood heat as candidates (Chang, 2007, Chapter 1). Usually, going back and forth between the requirements is necessary to reach a satisfactory measuring strategy. And this is precisely the point. Finding a measurement strategy that is adequate for our purposes is the result of our desire to improve our standards. Chang (2007) has described a similar idea as epistemic iteration. This is “a process in which successive stages of knowledge, each building on the preceding one, are created in order to enhance the achievement of certain epistemic goals” (Chang, 2007, 45). This is how temperature was invented. My suggestion is thus that, if we are actually interested in measuring the impact of our scholarship, we first have to establish the purposes for which we do it. Then we need to devise a measuring strategy fit for these purposes. Otherwise we are fooling ourselves.

I would like to thank Boudewijn de Bruin and the SOM Research Institute at the University of Groningen for the financial support to attend the HES conference in New York to present a preliminary version of this work. I also thank the audience at the HOPE Center seminar and Kevin Hoover in particular for their helpful comments.

Bibliography

Adair, William C. 1955. Citation Indexes for Scientific Literature? American Documentation, 6(1): 31-32.

Brembs, Björn, Katherine Button, and Marcus Munafò. 2013. Deep Impact: Unintended Consequences of Journal Rank. Frontiers in Human Neuroscience, 7(2013).

Cartwright, Nancy, Norman Bradburn, and Jonathan Fuller. 2017. A Theory of Measurement. In Leah McClimans (ed.), Measurement in Medicine: Philosophical Essays on Assessment and Evaluation. London and New York: Rowman and Littlefield, 73-88.

Cartwright, Nancy and Rosa Runhardt. 2014. Measurement. In Nancy Cartwright and Eleonora Montuschi (eds), Philosophy of Social Science: A New Introduction. Oxford: Oxford University Press, 265-287.

Chang, Hasok. 2007. Inventing Temperature: Measurement and Scientific Progress. Oxford: Oxford University Press.

Clarivate. 2020. Journal Impact Factor (JIF). https://incites.help.clarivate.com/Content/Indicators-Handbook/ih-journal-impact-factor.html.

Editorial. 2005. Not-so-deep Impact. Nature, 435: 1003-1004.

Efstathiou, Sophia. 2012. How Ordinary Race Concepts Get to Be Usable in Biomedical Science: An Account of Founded Race Concepts. Philosophy of Science, 79(5): 701-713.

Garfield, Eugene (n.d.). The Clarivate Analytics Impact Factor. Web of Science Group. https://clarivate.com/webofsciencegroup/essays/impact-factor/ [retrieved 23 October 2020].

Garfield, Eugene. 1955. Citation Indexes for Science. Science, 122(3159): 108-111.

Garfield, Eugene. 1963. Science Citation Index. Science, 144(3619): 649-654.

Garfield, Eugene. 2006. The History and Meaning of the Journal Impact Factor. JAMA, 295(1): 90-93.

Gingras, Yves. 2014. Criteria for Evaluating Indicators. In Blaise Cronin and Cassidy R. Sugimoto (eds), Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact. Cambridge, MA: MIT Press, 109-125.

Gross, Paul L. K., and Edward M. Gross. 1927. College Libraries and Chemical Education. Science, 66(1713): 385-389.

Hoeffel, Christine. 1998. Journal Impact Factors. Allergy, 53(12): 1225-1225.

King, C. 2017. Journal Citation Reports: A Primer on the JCR and Journal Impact Factor. Clarivate Analytics. https://clarivate.com/blog/science-research-connect/journal-citation-reports-new-primer/ [retrieved 28 August 2021].

Leydesdorff, Loet, Lutz Bornmann, Jordan A. Comins, and Stasa Milojević. 2016. Citations: Indicators of Quality? The Impact Fallacy. Frontiers in Research Metrics and Analytics, 1.

Mingers, John and Liying Yang. 2017. Evaluating Journal Quality: A Review of Journal Citation Indicators and Ranking in Business and Management. European Journal of Operational Research, 257(1): 323-337.

Moustafa, Khaled. 2015. The Disaster of the Impact Factor. Science and Engineering Ethics, 21(1): 139-142.

Perez, Oren, Judith Bar-Ilan, Reuven Cohen, and Nir Schreiber. 2019. The Network of Law Reviews: Citation Cartels, Scientific Communities, and Journal Rankings. The Modern Law Review, 82(2): 240-268.

Tal, Eran. 2015. Measurement in Science. In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2017/entries/measurement-science/.

Web of Science Training. 2017, October 13. Journal Citation Reports. Journal Impact Factor. YouTube, https://www.youtube.com/watch?v=VJc3PC697oc&list=PLyh-Yuqjd7yqRabcyeChfycIdoVXgxyFI [retrieved 28 August 2021].

Notes

1 See Edwards and Meardon, this volume, for details.

2 The present article is not intended to take sides on this particular episode.

3 Cartwright and Runhardt (2014) call these concepts with fuzzy boundaries Ballung, a German word for a concentrated cluster. These are concepts that are characterised by family resemblance between individual members, rather than by a specific common property.

4 Note that this is not only about classification. For instance, the criterion used to define temperature is “manifestation of thermal energy”. Naturally, since thermal energy is present in all matter, there are no tokens to exclude, but they are picked out on the basis of this criterion and measured accordingly.

5 See Edwards and Meardon (this volume) for details about these operations.

6 In 1955, Garfield referred to this system as determining “journal impact”. But, as per above, it was a qualitative system that would convey information about individual papers.

7 The h-index measures the cumulative impact of an author’s output; the eigenfactor score measures the number of times articles from a journal published in the past five years have been cited in the JCR year; and the SNIP measures contextual citation impact by weighting citations based on the total number of citations in a subject field.

8 I thank Stephen Meardon for pointing this out and providing the example that follows.

9 Naturally, “quality” is not without its problems; there are many desiderata for quality.

10 Forder (this volume) seems to make exactly the opposite claim: namely, that citations closer in time to Friedman (1968) showed a better grasp of what Friedman was arguing than later papers, somehow suggesting the superiority of citations closer in time. This need not be a contradiction. Rather, it highlights the fact that there are arguments to be made for favouring both the long and the short term. It demonstrates that i) the JIF’s preferred two-year window is far from obvious; and ii) different fields have different citation needs and habits, which makes the JIF’s two-year window even more contentious.

References

Bibliographical reference

Melissa Vergara-Fernández, “The Journal Impact Factor Might Be Useful But, for What, Precisely?”, Œconomia, 11-3 | 2021, 473-484.

Electronic reference

Melissa Vergara-Fernández, “The Journal Impact Factor Might Be Useful But, for What, Precisely?”, Œconomia [Online], 11-3 | 2021, Online since 01 September 2021, connection on 21 September 2024. URL: http://journals.openedition.org/oeconomia/11593; DOI: https://doi.org/10.4000/oeconomia.11593

About the author

Melissa Vergara-Fernández

Erasmus University Rotterdam, info@mvergarafernandez.nl

Copyright

CC-BY-NC-ND-4.0

The text only may be used under licence CC BY-NC-ND 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
