
Factoring “Impact” in the History of Economics

Understanding the Effects of Journal Impact Factors on the Publishing Behavior of Historians of Economics

Jimena Hurtado and Erich Pinzón-Fuchs
p. 485-496

Abstract

Through a survey conducted in 2019, we assess the consequences of impact factors on the research practices of the community of historians of economics and of economic thought. This first systematic look at the possible effect of these quantitative indicators on the research community shows that impact factors are not regarded as a source of information about the quality and originality of articles, but that they do signal the centrality and visibility of a journal. Respondents report limited effects on their own careers and research practices, but note a stronger effect on the careers and practices of their colleagues. According to respondents, the possible effects of impact factors would be negative, notably for the visibility of the field, because impact factors encourage strategic behavior among their colleagues that is not necessarily related to the quality of research.


Full text


The mania for quantified academic evaluation has had consequences, to a greater or lesser extent, on all fields and disciplines around the world. The Impact Factor (IF) and the h-index are nowadays considered the main metrics for evaluating impact (Tregoning, 2018)1. Some influence on scholarly practices is therefore to be expected (Casadevall and Fang, 2014), whatever the shortcomings of these metrics and of their use2. To get a sense of this effect in the scholarly community of historians of economic thought, we conducted an anonymous online survey among historians of economics. As Krücken and Meier (2006) put it, despite the global character of these evaluation systems, local specificities produce a kind of “creative deviation from a given path” of universalizing and global models. We want to understand these local specificities for the community of historians of economics, whose members work in different geographical locations and institutional settings. We designed the survey to probe several ways in which these measures could affect the field: careers, publication practices, and sources of information. The questions were formulated to capture respondents’ perception of the effect of IFs on their careers, in terms of promotion or access to funding, on their own publication practices and those of their colleagues, and on the information they use to assess the different dimensions usually associated with research quality. The results on this last dimension also give a sense of how members of this scholarly community view the claims made in the introduction to this symposium about the dubious character of these metrics as indicators of quality, or as poor substitutes for the evaluation of individual scholars.
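As a quick reminder of what these two metrics are, the standard two-year journal Impact Factor and the h-index can be written as follows. The notation below is ours and follows the usual textbook definitions, not anything specific to the article or to Clarivate’s implementation:

```latex
% Two-year Impact Factor of journal J in year Y:
% C_Y(y) = citations received in year Y by items J published in year y,
% N_y    = number of citable items J published in year y.
\[
\mathrm{IF}_J(Y) \;=\; \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
\]
% h-index of a researcher whose papers are ranked by citations c_1 >= c_2 >= ...:
% the largest rank k such that the k most-cited papers each have at least k citations.
\[
h \;=\; \max\{\, k : c_k \ge k \,\}
\]
```

The first is a property of a journal in a given year; the second summarizes an individual’s publication record, which is why journals, papers, and researchers are treated as distinct objects throughout the survey results below.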


Between January 10 and February 13, 2019, we conducted an online survey of 45 questions, mostly multiple-choice with some open answers, through Google Forms (https://goo.gl/​forms/​hjJR7DZAJV818ift1). We shared it through the “SHOE”3 mailing list, the mailing lists of three important history of economics centers in France (Triangle, PHARE, and REhPERE), and a mailing list of scholars drawn from the members of the Latin American Association for the History of Economic Thought (ALAHPE) and the Iberian Association for the History of Economic Thought (AIHPE). The second and third lists largely overlap with the first, so many scholars received the invitation more than once. We targeted these lists exclusively in order to elicit answers from historians of economics. 101 people answered our survey, about 9% of the subscribers of SHOE. Since our sample was not randomized and respondents instead “volunteered” to answer, it is difficult to say whether it yields representative and robust results. However, the socio-demographic profile of the respondents seems to replicate the most common characteristics of the academic community of historians of economic thought, judging by the composition of professional associations such as the European Society for the History of Economic Thought (ESHET) and the History of Economics Society (HES). Most respondents identify as masculine (75.2%), are older than 45 (59.4%), are located mostly in France, the United States, and Italy (52%), completed their Ph.D. in those same countries (50%), are Full or Associate Professors (64%), and belong on average to 2.4 professional associations, mostly ESHET and HES. Contrasting the number of respondents who reported being members of at least one of five history of economics associations with the 2018 and 2019 HES and ESHET conference programs (between 200 and 280 participants each) and with the average HES membership between 2015 and 2019 (250 members), the sample can be estimated at roughly 25% of the community. There is, no doubt, a self-selection bias that might lead us to overestimate the negative effects of these metrics on scholarly practices, for two reasons. First, the survey was conducted not long after Clarivate Analytics (2018) announced its decision to exclude The European Journal of the History of Economic Thought (EJHET), History of Economic Ideas (HEI), and The Journal of the History of Economic Thought (JHET) from its Journal Citation Reports, prompting actions and communications that the journals’ editors, members of the scholarly community, and members of the governing bodies of ESHET and HES sent to Clarivate Analytics and made public through, among other channels, the SHOE mailing list. Second, it is possible that the people who took the time to answer the survey dislike these metrics and took the opportunity to express a negative opinion. We have therefore been very careful in interpreting our results, and have paid special attention to the qualitative information provided in some answers. Table 1 summarizes the characteristics of respondents.

Table 1. Survey Data (in percentages)

|                          | Female     | Male       | Other    | Total       |
|--------------------------|------------|------------|----------|-------------|
| Observations             | 23 (22.8%) | 76 (75.2%) | 2 (2.0%) | 101 (100%)  |
| Age                      |            |            |          |             |
| < 44 years               | 39.1       | 39.5       | 100      | 40.6        |
| > 45 years               | 60.9       | 60.5       | 0.0      | 59.4        |
| Fields                   |            |            |          |             |
| Economics                | 70.8       | 84.2       | 100      | 82.2        |
| STS, Soc., Hist. & Phil. | 16.7       | 14.5       | 0.0      | 13.9        |
| Other                    | 12.5       | 1.3        | 0.0      | 3.9         |
| Position                 |            |            |          |             |
| PhD or Postdoc           | 4.3        | 7.9        | 0.0      | 6.9         |
| Tenure track             | 17.4       | 14.5       | 0.0      | 14.9        |
| Tenured                  | 73.9       | 61.8       | 50.0     | 64.4        |
| Other                    | 4.4        | 15.8       | 50.0     | 13.9        |
| Current Location         |            |            |          |             |
| Europe                   | 82.6       | 60.5       | 50.0     | 65.4        |
| North America            | 8.7        | 22.4       | 50.0     | 19.8        |
| Latin America            | 8.7        | 10.5       | 0.0      | 9.9         |
| Other                    | 0.0        | 6.6        | 0.0      | 5.0         |

Source: Own calculations

Other relevant characteristics of respondents concern their publication record, as it shows whether they have experience with the process of publishing in a peer-reviewed journal. Most of the journals in which respondents reported having published in the last five years are included in national and/or international indices, and all respondents have published in EJHET, History of Political Economy (HOPE), or JHET. While 90% of participants reported having “heard about Impact Factors,” only 63.3% reported having heard of h-indexes. Most interestingly, 70.3% claimed to know what IFs are, 25.7% said that they “had an idea of what [IFs] are,” and only 4% reported not knowing what they are. We may therefore assume that the historians of economics who answered the survey understand these metrics, even if most of them do not know the IF values of particular history of economics journals. There are geographical differences in these answers: historians of economics in France, Italy, and South America are more aware of the specific IF values of HOPE and Econometrica than those in other parts of the world.

The results of the survey indicate that respondents know what these metrics, and especially IFs, are; that they believe these metrics are a limited source of information; and that the metrics have had consequences on the scholarly practices of the members of the community, more so on those of their colleagues than on their own. In what follows, we detail these results, differentiating answers by socio-demographic characteristics when possible, in the light of the qualitative answers gathered.

1. Impact on Career

To begin with, we asked whether these metrics had had any effect on the respondent’s career, in order to get an idea of the evaluation and assessment practices and of the environments in which respondents pursue their professional careers. While 42.6% answered that it was possible (“Maybe”) or that they did not know whether IFs had affected their careers, 35.6% answered “No.” This might be surprising, as most of them reported working in economics departments, where publication in peer-reviewed journals included in rankings that use IFs might well be part of the promotion criteria. However, most respondents are located in Europe (65.4%), and specifically in France and Italy (36%), where promotion procedures are regulated by national committees and include criteria beyond publication records, assessed with rankings that do not necessarily rely on IFs.

This reading is supported by the fact that 44% of those on tenure track (15% of the sample) answered that IFs had not affected their careers; they are mostly located in Europe, where the use of these metrics has a shorter history than in, say, the U.S. The same could explain why, in a sample where the majority are over 45 and tenured, most express doubts or report that IFs have not affected their careers. Moreover, as one respondent stated, what may count is not a number but the tier of the ranking in which specific journals are classified, rather than a specific measure such as the IF. However, among those who report that their career has been or might have been affected, the reported effect on promotion and access to funding is slightly more positive than negative.

Respondents, especially female respondents (82.6%) and those on tenure track (73.3%), state that they need to publish to advance their careers, but this need does not seem to map onto the impact measured by these metrics. It is possible, then, that the quantitative evaluation mania expresses itself otherwise in the field, or that IFs and h-indexes do not show the whole picture of research assessment. Relevant information about the quality of research might therefore be found elsewhere, and the impact these metrics have on scholarly practices might be mitigated.

These findings are in line with Cardoso’s contention in this symposium that these metrics do not have much impact on historians of economics. But the doubt expressed in the survey could also mean that this is still a recent change for the community and that other factors weigh more heavily in the scholarly practices of its members.

2. What these Metrics Measure

Table 2 presents respondents’ views of what IFs actually measure. The information gathered confirms that IFs are a limited source of information and, more importantly, that they are not a meaningful source of information about key aspects of research such as quality and originality, which can be associated with the contributions made to the field, respondents’ main motivation for publishing. This is linked to the reasons why respondents decide to read an article. All respondents answered that they read an article because it is relevant to their research, and 96% because they know the author is a good scholar. According to the information recorded in Table 2, respondents, unsurprisingly, rely on IFs to assess neither the relevance of a paper nor the quality of a researcher. It is possible, though, that IFs reflect such quality and originality through the visibility and centrality of papers and researchers. In all the dimensions explored in the survey, IFs appear to be more meaningful measures for journals than for papers or researchers.

Table 2. Percentage of Respondents Who Believe that “Impact Factors are a Meaningful Measure of”

|                                                               | Strongly agree | Agree | Disagree | Strongly disagree | N/A |
|---------------------------------------------------------------|----------------|-------|----------|-------------------|-----|
| The quality of a paper                                        | 3.0            | 18.8  | 42.6     | 28.7              | 6.9 |
| The quality of a researcher                                   | 1.0            | 21.8  | 39.6     | 30.7              | 6.9 |
| The quality of a journal                                      | 4.0            | 39.6  | 36.6     | 13.9              | 5.9 |
| The visibility of a paper                                     | 17.8           | 58.4  | 10.9     | 7.9               | 5.0 |
| The visibility of a researcher                                | 13.9           | 51.5  | 20.8     | 8.9               | 5.0 |
| The visibility of a journal                                   | 26.7           | 55.4  | 8.9      | 4.0               | 5.0 |
| The relevance of a paper for a scholarly community            | 6.9            | 26.7  | 38.6     | 21.8              | 5.9 |
| The relevance of a researcher for a scholarly community       | 5.0            | 24.8  | 43.6     | 18.8              | 7.9 |
| The relevance of a journal for a scholarly community          | 5.9            | 44.6  | 27.7     | 13.9              | 7.9 |
| The originality of a paper                                    | 1.0            | 13.9  | 39.6     | 37.6              | 7.9 |
| The originality of a researcher                               | 1.0            | 12.9  | 39.6     | 38.6              | 7.9 |
| The originality of the work published in a particular journal | 1.0            | 13.9  | 37.6     | 37.6              | 9.9 |
| The centrality of a researcher in the scholarly network       | 8.9            | 49.5  | 21.8     | 12.9              | 6.9 |
| The centrality of a journal in the scholarly network          | 14.9           | 53.5  | 13.9     | 8.9               | 8.9 |

Source: Own calculations

Table 2 shows that respondents consider IFs to measure the visibility and centrality of a paper, a researcher, and a journal, but not their quality or originality. Opinion is divided on whether IFs are a meaningful measure of the relevance of a journal for the scholarly community, but most consider that they do not reflect the relevance of a paper or a researcher. In line with their self-reported understanding of these metrics, respondents regard them as meaningful measures of some characteristics of journals and papers, but much less so of researchers. IFs would thus convey a certain type of information but remain insufficient for assessing all relevant dimensions of research.

In that sense, when asked how they decide when in a position to hire or give an award to someone, respondents reported not using these metrics but rather the candidate’s whole dossier, together with the opinions of colleagues. Tenured Associate and Full Professors reported disagreeing or strongly disagreeing that the IF is a proxy for the scholarly quality of candidates (38.46% and 64%, respectively). As could be expected, most respondents therefore disagree or strongly disagree that these metrics help make their decisions easier: 38.5% of Associate Professors and more than 50% of Full Professors did not base their evaluations on them. Respondents reported that the IF could be a useful metric because it could indicate the vitality of a field and its visibility, and could provide an objective measure, but that it is limited in the information it conveys. This points to a more comprehensive evaluation that goes beyond bibliometrics, and to other sources of relevant information, among which colleagues’ opinions and the information that circulates informally within a scholarly community might figure prominently.

Beyond the limited information IFs convey, some distrust towards this measure shows in respondents’ belief that IFs could be easily manipulated: more than half answered “yes” or “maybe,” a third answered that they did not know, and 4% answered “no.” Almost half of respondents on tenure track, or with between 10 and 15 publications in the last five years, answered this question affirmatively. Among those who gave examples of how such manipulation could take place, most answers concerned citation practices such as cross-citations, irrelevant or unnecessary citations, or citing within a shorter time window.
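To see why citation timing matters for this kind of manipulation, recall the two-year IF written out above: only citations made in year Y to items published in Y−1 and Y−2 enter the numerator. The arithmetic below is our illustration with made-up numbers, not survey data:

```latex
% Illustrative numbers (ours): a journal with N_{Y-1} + N_{Y-2} = 50 citable
% items that receive 40 citations in year Y within the two-year window has
\[
\mathrm{IF}_J(Y) \;=\; \frac{40}{50} \;=\; 0.8 .
\]
% Ten extra citations placed inside the window -- cross-citations or
% "suggested" references to recent articles -- raise the numerator only:
\[
\mathrm{IF}_J(Y) \;=\; \frac{40 + 10}{50} \;=\; 1.0 ,
\]
% while citations to articles older than two years leave the IF unchanged,
% which is what "citing in a shorter time window" exploits.
```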

3. Publication Practices

Turning to the ways in which these measures might have affected respondents’ publication practices, we asked why respondents decide to submit a paper to a particular journal. The main motivation is the quality of the research published in the journal (91.1% strongly agree or agree), followed by the journal’s prestige (87% strongly agree or agree) and the promptness of the editorial decision (65.4% strongly agree or agree). Since we had found that IFs might be a meaningful measure mostly of the centrality and visibility of a journal, there might be a relation with prestige, and less so with quality. Centrality and visibility might also be connected with a journal’s position in rankings built using criteria other than, or in addition to, IFs. As in the case of candidates, the information respondents have about the quality of the articles published and about journals’ prestige seems to come from other sources. The answers seem to confirm this: 64.4% of respondents indicated that they do not consider the IF of a journal when deciding where to submit their work, but they do take into account the journal’s position in different rankings. While some rankings might rely on these metrics, others, as noted above, take into consideration other information, such as the opinion of the scholarly community or of recognized scholars in the field, incorporating insider knowledge into the assessment and classification of research outlets. Moreover, many journals in the field are not included in Clarivate’s measures and therefore do not appear in the Journal Citation Reports, and for those that are included the differences in IFs are small and do not seem to explain why some are more prestigious within the scholarly community.

Nevertheless, it is interesting to notice some differences by gender. For the options “disagree” and “strongly disagree” about the importance of the IF value in the decision of where to submit, the percentage drops by almost 8 percentage points from female to male respondents, and the difference between feminine and masculine respondents who strongly agree or agree that the IF is an important criterion is 12.7 percentage points. Female scholars thus seem to feel more affected by IFs when submitting their work. This is not the only gender difference in the criteria considered when submitting a paper to a journal. None of the respondents identified as feminine disagrees or strongly disagrees that the journal’s prestige is among the most important criteria, although they are more divided about the IF value (39.1% strongly agree or agree, and 56.5% strongly disagree or disagree) than those identified as masculine (26.4% strongly agree or agree, and 67.1% strongly disagree or disagree). Finally, the journal’s position in rankings also seems to matter more for those identified as feminine, with a difference of 23.4 percentage points.

So, even if publishing advances respondents’ careers, that advancement goes beyond the information or signal that IFs or h-indexes convey. Rather than seeking visibility or centrality, respondents report “making a contribution to their field” as the main motivation to write publishable papers (99%), followed by the “importance for their careers” (68.32%), rather than merely having a better h-index (75.24% disagree or strongly disagree with this motivation). In line with the motivation of making a contribution, more than half of respondents want to participate in the community’s conversation by writing about issues considered significant; this motivation is even more important for those on tenure track (73.3%), and, noticeably, 65.2% of female respondents “want to write about issues that are hotly debated.” This could confirm that these metrics do not capture other, more relevant dynamics within a scholarly community or in a researcher’s professional life related to publication. But larger professional forces might also be at play. Respondents doubt the impact of IFs on their careers and seem to use other sources of information to assess the prestige of a journal, the originality of a paper, or the relevance of a researcher, all probably associated with more insider dynamics and information. Yet 62.25% of respondents consider that IFs have an impact on the visibility of the History of Economic Thought (HET): 13.27% consider the impact positive, 48.98% consider it negative, and 30.61% say they do not know. Those identified as female are more divided in their answers than males; those who have published between 5 and 10 papers in the last five years agree less that IFs have had a negative impact (37.84%), though in this category 40.54% answer that they do not know; and more than half of Assistant and Full Professors share the view of a negative impact, whereas 46.15% of Associate Professors answered that they did not know. Even if IFs appear to convey rather incomplete information within the community, it is possible that respondents consider that IFs give other communities information about the field.


It is possible to draw a parallel between this perception of a negative impact on the field, as seen from other fields, and the answers on the impact on one’s own practices contrasted with those of colleagues in their roles as authors, referees, and editors4. The parallel points to the difference between what we might label inside and outside information. That is, the sources of relevant information within the field go beyond IFs, but IFs might be used by those outside the field; and given that the IFs of the field’s journals are relatively low compared to those of other fields in economics, and especially to those of recognized generalist journals, IFs have a negative impact on HET. Along similar lines, respondents, self-identified as historians of economic thought, report that these metrics have much larger consequences on the practices of their colleagues than on their own. The difference is considerable: 80.65% believe IFs affect the practices of their colleagues as authors, 29.03% as referees, and 48.39% as editors, in contrast to 37.04%, 15.91%, and 27.03% who think the metric affects their own scholarly practices in each of these roles. This might reflect a case of pluralistic ignorance, in that participants think IFs affect their colleagues’ practices more than their own, combined with the previous results pointing to a stronger perception of a negative impact on the visibility of the field and a mostly undecided perception as to whether IFs have had consequences on their own careers.

Among the reasons why IFs might affect their colleagues’ practices, respondents mentioned strategic behavior and the pressure felt on the tenure track and in early careers, which might lead other historians of economics to be selective about the journals they submit to, to submit to journals outside the field with higher IFs, and to accommodate their research to “better publishable” topics and methodologies. Regarding their own practices, some respondents reported trying to publish in higher-ranked journals or framing their papers in certain ways to increase their chances of being published, while others wrote that it was more of a generational influence and that it led to reading less and perhaps to more average publications. Even if the reported impact is much smaller for one’s own practices than for others’, there seems to be a coincidence in the type of strategic behavior adopted to increase the chances of publishing one’s research.

Most respondents completed their doctoral programs between 2000 and 2019, and almost 30% received their Ph.D. in the last ten years. Even if only 15% are Assistant Professors and 5% graduate students, while the larger part of respondents are Full or Associate Professors (64%), it is possible that the pressure associated with the tenure track or with the consolidation of a career, joined to the growing practice of quantitative and qualitative career assessment, is familiar to a large part of the sample. Thus, the smaller effect reported of these metrics on their own practices could be underestimated and that on others overestimated, leaving the actual situation closer to the adoption of strategic publication behavior that is not completely associated with or led by IFs, but in which these metrics still play a role.

However, even if respondents report that the impact of IFs on their colleagues’ practices is non-negligible and much larger than the impact on their own practices, when asked about actual practices 43.56% reported that they had never received a referee report from an HET journal with a strong suggestion to cite irrelevant work. Nevertheless, 52% of female respondents reported having received such strong suggestions many or a few times, as did 57.7% of Associate Professors and 54.16% of those between 45 and 64 years old. In these categories, more than half of female respondents and of those between 45 and 64 years old included the suggested citation. It would seem that specific members of the community have experienced referees’ suggestions that could be associated with IF manipulation, though the true motivation is impossible to know, and have reacted as authors to avoid having their submission penalized or rejected. Moreover, 62.37% of respondents reported that they had not suggested citing work by themselves or by people they know, and those who said they had did so because they considered it absolutely relevant to the submission they were evaluating. When asked whether they had received a letter from an editor of an HET journal strongly encouraging them to cite irrelevant work, 81.19% answered negatively; so when 48.39% of respondents report believing that IFs affect editors’ practices, this particular mechanism for manipulating IFs does not seem to be what they have in mind. With these answers it is impossible to know the motivation behind such suggestions, and what a referee considers absolutely relevant work does not necessarily appear so to authors; there is thus no way of knowing from these results whether such suggestions are driven by IFs or by other reasons. In any case, some respondents identify, and have experienced, practices that could be related to their perception of an impact of IFs on their colleagues’ practices as authors, referees, and editors.

4. Concluding Remarks

Historians of economics who answered this survey report that IFs and h-indexes convey limited information, which is not the most relevant to their practices, even if these metrics affect the practices of their colleagues as referees, authors, and editors. Our survey does not allow us to infer any clear local dynamics in the geographical sense. There are no great differences between regions or countries, even though respondents from Brazil seem relatively more aware of and affected by IFs. Respondents from France report in their open answers that national rankings are far more influential than research impact metrics on their own. Such rankings may take these measures into account, but they also reflect a somewhat general consensus about the influence and visibility of journals; their construction relies on the consultation of experts in different areas and gives a broader and more complex view of the dynamics of economics in general.

We have also found that answers are relatively similar across gender, age, and position. However, respondents self-identified as female report a higher impact on their careers and seem to pay more attention to a journal’s IF when deciding where to submit their work. This also seems to be true, to a lesser extent, for younger historians of economics. Senior members express in their open answers both concern about and little respect for IFs, underscoring the negative effects they might have on scholarly practices. In general, when IFs do have effects on historians of economics, these effects seem to be negative for the most part, and respondents consider that they promote strategic behaviors not necessarily related to quality. But, again, respondents consider that these effects fall on their colleagues’ scholarly behavior rather than on their own.

This survey offers a first systematic glance at the possible consequences of IFs and other research metrics on the scholarly practices of the members of the community of historians of economics. Respondents do not report that the IF is an important element in deciding where to submit their research, what to read or cite for their own work, or what to recommend to others. However, in a small field such as the history of economic thought, citation among members of the community, and of articles published in the field’s journals, may eventually be read as malpractice. The most salient conclusion is that even if we believe these measures have a small impact on our own scholarly practices, they say something about the visibility and centrality of research, and they have a negative impact on the field and on the practices of our colleagues as authors.


Bibliography

Casadevall, Arturo and Ferris C. Fang. 2014. Causes for the Persistence of Impact Factor Mania. mBio, 5(2): 1-5.

Clarivate Analytics. 2018. 2017 Journal Citation Reports. Clarivate Analytics.

Krücken, Georg and Frank Meier. 2006. Turning the University into an Organizational Actor. In Gili Drori, John W. Meyer, and Hokyu Hwang (eds), Globalization and Organization: World Society and Organizational Change. Oxford: Oxford University Press, 241-257.

Tregoning, John. 2018. How Will You Judge Me if not by Impact Factor? Stop Saying that Publication Metrics Don’t Matter, and Tell Early-Career Researchers What Does, Says John Tregoning. Nature, 558: 345.


Notes

1 Casadevall and Fang (2014) define and trace “Impact Factor Mania” as the life sciences’ obsession with the journal impact factor.

2 See the Introduction to this Symposium.

3 The SHOE (Societies for the History of Economics) list “is a moderated forum, sponsored by the History of Economics Society and the European Society for the History of Economic Thought,” created in 1995, which has 1,180 subscribers from 42 countries (https://historyofeconomics.org/resources/shoe-list/). We thank Humberto Barreto for this information.

4 Drawing information from these answers is more difficult, as not all respondents answered the questions for each role: 61.39%, 26.73%, and 36.63% of the sample answered how they thought IFs affected their colleagues’ scholarly practices as authors, referees, and editors, respectively, and 80.2%, 43.56%, and 36.63% answered on the effect on their own practices in each of these roles.


How to cite this article

Print reference

Jimena Hurtado and Erich Pinzón-Fuchs, “Understanding the Effects of Journal Impact Factors on the Publishing Behavior of Historians of Economics”, Œconomia, 11-3 | 2021, 485-496.

Electronic reference

Jimena Hurtado and Erich Pinzón-Fuchs, “Understanding the Effects of Journal Impact Factors on the Publishing Behavior of Historians of Economics”, Œconomia [Online], 11-3 | 2021, online since 01 September 2021, accessed 04 October 2023. URL: http://journals.openedition.org/oeconomia/11489; DOI: https://doi.org/10.4000/oeconomia.11489


Authors

Jimena Hurtado

Professor, Economics Department, Universidad de los Andes, jihurtad@uniandes.edu.co


Erich Pinzón-Fuchs

Assistant Professor, Escuela de Economía, Universidad Nacional de Colombia, erapinzonfu@unal.edu.co



Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. All other elements (illustrations, imported files) are “All rights reserved,” unless otherwise stated.
