
Dossier: "Anthropology and the neoliberal agenda"

The impact of impact

O impacto do impacto
Caroline Knowles and Roger Burrows
p. 237-254

Abstract

This paper explores ways in which the new preoccupation with "impact" – understood as "influence" beyond the academy – formalised in the 2014 Research Excellence Framework (REF) in UK universities reshapes the working conditions and practices in which contemporary anthropology and sociology are produced and, ultimately, what these disciplines are able to be. It suggests that impact, in concert with broader changes in what we might think of as the "metricization" of higher education, reshapes the relationship between universities and government, bringing new cultures of precarity to these disciplines. The paper ruminates on how impact – a new addition to the metric assemblages that now dominate universities – shapes the kinds of research we can do, as well as the conditions in which we do it. It notes the deepening competition, the narrowing of disciplines, and the emphasis on the visibility and performance of intellectual labour.


Full text

Metrics, markets and higher education in the United Kingdom

For Foucault, to equate neoliberalism with laissez-faire – where the role of the state is largely restricted to supervising the market – is an analytic error. Foucault suggests that the relationship between state and market under neoliberalism is, in fact, the converse: "a state under the supervision of the market rather than a market supervised by the state" (Foucault 2008: 116). Under this model, the only mechanism by which the state can legitimate itself is via "self-marketization". The neoliberal state has to secure the freedom of markets, but it can only do this with authority if it extends the same logic of the market to its own organisational structures and practices. Rather than viewing markets primarily as spaces of exchange – as is the case with laissez-faire – markets have to be viewed primarily as sites of competition. Under this depiction of neoliberalism – as a form of active statecraft within which the state must engage in all manner of "internal" strategies in order to legitimate its power over "external" market processes – it is no longer a matter of whether the market impinges upon state activities but how it does so. This means that privatization strategies have to be viewed as an inherent part of contemporary modes of governmentality, and that where "real" markets cannot be enacted, some form of "simulated" market has to be endorsed. In what we used to think of as the "public sector" in the UK this has been done through the introduction of audit and various forms of metrics that enable systematic comparisons between individuals, organisational agglomerations and institutions. In what follows we are interested in the impact of introducing such processes into the sphere of higher education, the state funding of research in the humanities and social sciences especially, and with the introduction of new measures of "impact" within this context in particular.

The growing importance of metrics in higher education has recently been the subject of much discussion (Burrows 2012; De Angelis and Harvie 2009; Holmwood 2011; Howie 2005; Lock and Martins 2011; Monastersky 2005). In the UK the life-world of the university is increasingly enacted through complex data assemblages drawing upon all manner of emissions emanating from routine academic practices such as recruiting students, teaching, marking, giving feedback, applying for research funding, publishing and citing the work of others. Some of these emissions are digital by-products of routine transactions (such as journal citations); others have to be collected by means of surveys or other formal data collection techniques (such as the National Student Survey – NSS); and others require the formation of an expensive bureaucratic edifice designed to assess the quality of administrative, teaching and research work (such as – the primary focus of this paper – impact in research assessment exercises).

The performative co-construction (Saetnan, Lomell and Hammer 2011) of academic life through multiple metrics – such as the NSS, the Transparent Approach to Costing (TRAC) data, data on average entry qualification tariffs, PhD completion rates, research income per capita, individual and group h-indices, journal impact factors, Quality Assurance Agency (QAA) subject and institutional reviews, and so on, and so on – is ubiquitous. Increasingly, such data is also being formally aggregated into any number of commercially driven ranking and "league" table systems, such as those developed by various national newspapers and, now, at a global level, by Times Higher Education (THE). Adopting a view of such data assemblages as not simply imprints or products of the social world, but as actively constituting that world, leads us to focus on the work that new technologies of value and measure do in constituting the university and recursively defining its practices and subjects (Burrows 2012).

As De Angelis and Harvie (2009: 17-24) note, different metrics operate at different scales: some at the level of the individual; some at departmental, school or faculty level; some at institutional level; some at national level; and some at international level. However, they are all folded into each other to form a complex data assemblage that confronts the individual academic. It would be quite easy to generate a list of over 100 different nested measures to which each individual academic in the UK is now (potentially) subject. However, for our purposes here, a few pertinent examples drawn from research assessment exercises (RAE) in general and the emerging "impact agenda" in particular will suffice to draw out the implications of carrying out academic work in a world dominated by numbers. We begin with a brief recent history of the funding of research in the UK.

The funding of research in the UK: some background context

University research in the UK has long been subject to something called a "dual support system" made up of two parts: first, block grants provided by the government in order to underpin research capacity; and, second, funds for specific research grants, made available by competition and administered by the research councils. These two sources of funding for research rely on very different administrative processes. Although it has always been clear on what basis specific research grants were awarded – peer review and competition – the allocation of block grants has been a very different matter. In what follows we attempt to provide a short summary of the development of this allocation – by way of context for the main thrust of what we want to argue in the paper – as described in sources such as Bence and Oppenheim (2005), Hicks (2009), Johnes, Taylor and Francis (1993) and Kelly and Burrows (2012).

Up until the mid-1980s it would be fair to say that the allocation of block grants for research was very opaque. At that time the University Grants Committee (UGC) was responsible for their allocation and, along with other public sector bodies, was encouraged by the Thatcher regime to take measures of performance seriously in the allocation of funds between institutions. In 1985-86 an attempt was made to judge the relative quality of university-based research. The criteria for judgement in this first Research Selectivity Exercise (RSE), as it came to be known, were, however, hardly any more transparent than had previously been the case. Each subject area was asked to produce a brief "research profile", within which, it was suggested, might be information on: indices of any financial support for research; staff and research student numbers; any measures of research performance deemed significant; a statement of current and likely future research priorities; and the titles of no more than five books or articles produced since 1980 considered typical of the best research. In May 1986 the "results" of this first RSE were published, to some consternation within the academy. Each subject within each university had been judged as either "outstanding", "better than average", "about average", or "below average".

A more robust second attempt was made by what had become the Universities Funding Council (UFC) in 1989. This second RSE was taken more seriously as it became increasingly apparent that the results would significantly impact upon funding allocations. The 1989 RSE was based on "informed peer review" from 70 advisory groups and panels containing 300 academics. This time the panels were provided with more structured data on research performance, including: the number of publications in relation to the number of full-time academic staff; bibliographical details of up to two publications for each full-time member of academic staff; the number and value of research grants and contracts; and the number of research studentships.

This information was used by each advisory group to rate units of assessment on a five-point scale using the rhetoric of "national" and "international" levels of excellence. So, for example: the lowest rating was a "1", meaning that national levels of excellence existed in none, or virtually none, of the sub-areas of activity; the mid-point was "3", meaning national levels of excellence in the majority of the sub-areas of activity; and the top rating, "5", was defined as international levels of excellence in some sub-areas of activity and national levels in virtually all others.

By the time of the third exercise in 1992 the university sector had expanded to include the ex-polytechnics. Each institution was now invited to select "research active" staff in post at the time of the assessment. The assessment was divided into 72 academic units of assessment (UoAs). The data became more extensive; in addition to each academic nominating two publications, quantitative information on all publications was required. Each submission was then ranked on a five-point scale similar to the one used in 1989. The allocation of resources by the UFC was based upon a "quality" measure using this scale, and a "volume" measure based upon the number of "research active" staff submitted.
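In stylised terms – our own schematic rendering, not a formula published by the funding council – the block grant allocated to a unit of assessment $u$ combined the two measures multiplicatively:

$$\text{allocation}_u \;\propto\; w(q_u) \times n_u$$

where $q_u$ is the unit's rating on the five-point quality scale, $w(\cdot)$ is the funding weight attached to that rating, and $n_u$ is the volume of "research active" staff submitted.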

The fourth exercise in 1996 relied less upon quantitative measures of research output and more on the supposed "quality" of publications. "Research active" staff in 69 different UoAs had to provide details of up to four publications published during the period of assessment; this was supplemented with details of "indications of peer esteem" in the form of editorships of prestigious journals, papers given at key conferences and so on. The rating scale was further finessed (to become, essentially, a seven-point scale) with the introduction of a new "top" five-star rating and the subdivision of the former band "3". "Measurement" of "quality" was again undertaken by peer review panels, and resources were again based on the quality grade multiplied by the volume of research active staff.

Further recalibrations in the fifth exercise in 2001 aimed to make it more transparent. However, the essence of the assessment remained intact even though the descriptions attached to the rating scales were reworded. UoAs awarded a five-star rating in 2001 which had also received the same rating in 1996 were awarded a new six-star rating, producing a new eight-point scale.

Throughout this period information gathering became ever more detailed and prescribed. The combination of an ever more refined quality rating traded off against a volume measure inevitably led to "game-playing" by universities. Anyone who worked in the UK higher education sector during this period will attest to how much academic and organizational practices have been incrementally recalibrated in relation to the RAE. Increasingly, the mundane realities of academic life have been recursively lived not only through the exercises themselves, but also through institutional imaginings of what future exercises might bring. Indeed, orientating towards the RAE and scenario planning for possible outcomes has become central to the routine discourse of futurism permeating university life (Burrows 2012).

For the sixth and most recent exercise, the results of which were published in 2008, the process of research quality assessment was fundamentally altered to produce a rating system that better approximated an interval level of measurement. Each submission was given a "quality profile" constructed from three sub-profiles relating to "outputs", "research environment" and "esteem". The weightings attached to each varied between UoAs. In the case of sociology – our own disciplinary base at the time – "outputs" were weighted at 75 per cent, "environment" at 20 per cent and "esteem" at 5 per cent. A panel of 16 peers examined – in the case of sociology – 39 detailed submissions containing information on: four publications for each member of staff submitted; a detailed narrative and statistical data on the research environment; and a narrative on various esteem measures. Each output was evaluated on the following scale:

– four star: quality that is world-leading in terms of originality, significance and rigour;
– three star: quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence;
– two star: quality that is recognised internationally in terms of originality, significance and rigour;
– one star: quality that is recognised nationally in terms of originality, significance and rigour;
– unclassified: quality that falls below the standard of nationally recognised work, or work which does not meet the published definition of research for the purposes of the assessment.
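To make the arithmetic of a "quality profile" concrete, the sketch below combines three sub-profiles using the sociology weightings cited above. Only the 75/20/5 weightings come from the text; the sub-profile figures themselves are invented for illustration.

```python
# Hypothetical illustration of how a 2008-style "quality profile" is
# assembled. Each sub-profile gives the proportion of activity judged
# at each star level: (4*, 3*, 2*, 1*, unclassified).

WEIGHTS = {"outputs": 0.75, "environment": 0.20, "esteem": 0.05}

sub_profiles = {
    "outputs":     (0.15, 0.40, 0.35, 0.10, 0.00),  # invented figures
    "environment": (0.20, 0.50, 0.30, 0.00, 0.00),
    "esteem":      (0.10, 0.45, 0.35, 0.10, 0.00),
}

def overall_profile(subs, weights):
    """Weighted sum of the sub-profiles, star level by star level."""
    return tuple(
        round(sum(weights[name] * subs[name][i] for name in subs), 3)
        for i in range(5)
    )

print(overall_profile(sub_profiles, WEIGHTS))
# -> approximately (0.158, 0.423, 0.340, 0.080, 0.0):
#    the published "quality profile" for the submission
```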

In the next iteration – now called the Research Excellence Framework (REF) – with results expected in late 2014, we face something new in the mix: the "impact agenda". As part of a move to demonstrate that our work as academics has a "value" outside of the academy, we are increasingly subject to a range of administrative processes that demand we demonstrate that the research we carry out, and the outputs that result from it, possess some utility to non-academics and causal powers to influence the world in some way or another. This notion was first introduced ahead of the 2014 REF by the UK grant funding councils – the Arts and Humanities Research Council (AHRC), the Economic and Social Research Council (ESRC) and so on – as part of the grant application process. As well as making an academic case for support, applicants for funding now have to complete extensive forms detailing both proposed "pathways to impact" (what will be done to generate influence on the world) and an impact summary (what that impact is likely to be). But now the notion of impact is to be a central feature of the algorithms used to distribute the block grant, and as such is a new and central feature of the REF. In what follows we attempt to interrogate the likely impact of this impact agenda.

Constituting impact

Impact, then, is a relatively new addition to the data assemblages of the metric moment described above. Impact deepens the self-marketization of the state by introducing new forms of competition in quasi-markets, which enact and constitute the academic world and the ways in which we can live in it. Impact is the newest tool in the still-gathering audit and metric culture in UK higher education, producing new statements of account, new forms of verification and new reckonings requiring new visibilities (Shore and Wright 2000: 59). Audit is a term that has broken loose from its moorings in financial accounting so that it is now "applicable to all kinds of reckonings, evaluations and measurements" (Strathern 2000: 2). Universities find new ways of making themselves auditable, and impact is only the latest of these.

16In HEFCE’s (the Higher Education Funding Council for England) calculations, impact is about exerting influence beyond the academy. At an intuitive level this is neither new nor problematic. On the contrary, academics have always exerted influence beyond the academy through core practices – teaching students – within it. University teaching shapes successive generations of educated citizens, crucially enabling them to develop capacities in critical reasoning and intelligent participation in the issues and debates of the day; capacities that travel beyond universities and unfold in the production of an educated society. Oddly, given the centrality and pervasiveness of this form of influence, this doesn’t count in HEFCE’s formulation of impact. Only research, as will become clear below, counts in calculations of impact. In sociology and anthropology, research impact should not be problematic: the production and logics of social fabrics are our core business and it would be strange if we were not concerned with influencing them. It is hard to imagine a social issue or a set of circumstances that would not in some way benefit from the influence of sociological or anthropological investigation and analysis. But HEFCE’s impact agenda does not in any way embrace this intuitive version of impact.

In establishing the "impact" agenda in the 2014 REF, HEFCE intends to: "make explicit the benefits of research, communicate these more publicly, provide compelling evidence to the Government, and improve public understanding of research and its benefits to society" (HEFCE 2010: 11). Through research, HEFCE intends universities to raise their public profile and thus establish their social value in beneficial collaboration (our emphasis) with industry and public and third sector organizations; something achieved more easily in some disciplines than others. Financial consequences follow. In the next cycle of research audit, 20% of government support to universities (QR money in "administration speak") will depend on how far ideas and arguments have "demonstrably" circulated beyond the university in "measurable" ways. This could mean a very large sum of money for a single top-rated impact case study, and this could prove to be crucial for the survival of some already cash-starved departments suffering historic underinvestment.

18HEFCE’s definition of impact appears to embrace a range of influences: “social, economic, cultural, environmental, health and quality of life benefits” (HEFCE 2010: 13) judged by individual subject panels. Thus both the Sociology and the Anthropology and Development panels will establish the constitution of impact in their discipline when they review the “research environment commentaries” (nine-ten pages) and a statement about the specificities of particular “approaches to impact” (three pages) both of which set out institutional commitment to impact embedded in units of assessment and their research practices; and the “impact case studies” which demonstrate where and how these commitments have been successful, roughly one for every ten academics in the REF. Procedurally, at least, this appears to be open, subject-sensitive and allow for incommensurability between disciplines. Of course this places a large responsibility on REF panels, so the ways in which these are constituted and pursue particular versions of disciplines is important in the treatment of impact as well as judgement of publications (65% of the overall evaluation) – until now the main arena of assessment.

Two things immediately narrow this seemingly open, subject-sensitive approach. First is HEFCE's framing of impact; second, the ways in which it must be evidenced. "Reach" and "significance" are the words offered by HEFCE to help us think about impact. Reach allows for broad rather than deep social influence, and significance allows for influence to be concentrated in a small area or set of issues. In both cases it must be demonstrated that something has been "improved", "enriched" or "informed" by academic research. But where is this "something" situated? In the land and spaces of entities termed "users" and "beneficiaries" – preferred over "stakeholders" in this new argot – identified as the academy's new external target audiences for research. Because metrics are less developed in this area of the REF than they are in relation to publications, connections with and influence over these amorphous users and beneficiaries – groups, individuals and organisations – must be precisely evidenced. Thus the dual poles of influence – reach and significance – must be demonstrated in impact case studies ("ICSs" in the new argot) in ways that are auditable, "provable" and detailed (Dunleavy 2012). Moreover, research outputs – or books and articles, as some of us still like to call them – must be linked to impact case studies. This brings another layer of assessment to peer review and links the two arenas of assessment, impact and publication.

Furthermore, impact case studies must be quality-assessed – judged to be at least at a two-star level (four star being the highest) – which means they must at a minimum be considered by the panel to be "internationally recognised". This is a high bar. And what does it mean? Dunleavy (2012) has suggested that this requirement will inevitably produce inflated "fairy tales of influence" in the UK academy. And finally, all of this internationally recognised "fabulousness" must line up within a specific time frame: between 2008 – the date of the last audit – and 2013 – the cut-off census date for the next – although the original research generative of the impact could have taken place as far back as 1993! Earlier and later external influence is inadmissible, ruling out long-term influence developed over decades. As Strathern (2000: 2) sagely warns, in audit cultures only certain operations count. They do indeed.
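Rendered as a simple rule – our paraphrase of the eligibility window just described, not HEFCE's own wording – admissibility looks something like this:

```python
def case_study_admissible(impact_year: int, research_year: int) -> bool:
    """Paraphrase of the REF 2014 window described above: the claimed
    influence must fall between 2008 and 2013, and the underpinning
    research may date back no further than 1993."""
    return 2008 <= impact_year <= 2013 and research_year >= 1993

# Decades of earlier (or later) influence simply do not count:
print(case_study_admissible(impact_year=2005, research_year=1995))  # False
print(case_study_admissible(impact_year=2012, research_year=1994))  # True
```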

New reckonings

In what follows we take a close look at how impact shapes universities and the ways in which we work within them. We will argue that impact, and the audit cultures it co-constitutes and deepens, are further reformulating intellectual labour through research agendas and practices, which in turn reconstitute subjects like sociology and anthropology as well as the social relationships of department and university life. Inevitably, as with the publication-based REF, the effects of the impact agenda will only be fully apparent over time. As the impact REF results are announced, we can predict that university researchers will be encouraged to adjust research agendas and practices to take account of impact judgements in the hope of maximising scores and revenues in the next round of audit, not least because the survival of jobs, departments and universities will depend on it. Beyond these calculations, new versions of futurology will be imaginatively construed, to who knows what effect, once the impact genie is out of the bottle.

Research agendas

The impact agenda will potentially reformulate research agendas, inclining sociology and anthropology towards areas of research that more readily lend themselves to clear demonstrations of impact in the ways HEFCE requires. The more abstract, esoteric, speculative investigations that have a "back stage" function in developing our disciplines conceptually, but which have limited appeal to broader publics, are in danger of disappearing. Concepts like "assemblage" as a way of conceptualising cities, or "mobility" and "dwelling", developed to think about rhythms of contemporary movement and consociation, provide examples of research that does not readily lend itself to the impact agenda as HEFCE has formulated it. Of course these concepts have impact: they run through neighbourhoods, they map onto the ways in which people live, and they have political and policy ramifications, albeit indirectly. But their influence on public thinking is likely to be slower, more subtle and indirect, and, hence, more difficult to line up with clear auditable demonstrations of influence. And, since the impact agenda became a significant part of UK research councils' application processes – more of this later – research projects weighted towards conceptual interrogation like these are less likely to be funded anyway.

Judged against impact case studies with clearer short-term gains and more obvious connections to social policy and political calculation, overtly conceptually calibrated research will inevitably drop down research agendas as new hierarchies around cheap gains in public engagement are implicitly (or even explicitly) constructed. Such judgements support HEFCE's insistence that universities and their constituting disciplines publicly demonstrate their usefulness in government-led policy and political agendas: or go out of business. The reframing of universities in such utilitarian terms has, until now, largely been confined to their role in teaching. The idea that the university cannot be other than a mechanism for the accumulation of social and financial advantage – cashed in by students in labour markets – has now entrenched itself in the recalibration of research as wielding social policy and political influence. As university academics we have long tolerated, while grumbling about, these utilitarian arguments in relation to teaching and now, it seems, we have allowed them into the rationale for research too. Consequently, any remaining traces of the university as a place for reflection and creative thinking are extinguished by the deepening utilitarian influence of the impact agenda.

Most areas of sociological and anthropological research have a clear and demonstrable relationship with what is happening outside of the academy. Review any area of scholarship: research on cities addresses those who live in them, build them and manage them; research on migration addresses migrants, border control and the social impact of mobility and restriction. Work on class, gender and ethnicity, in addition to elaborating the meaning and operation of these concepts, in practice addresses multiple publics, public culture, government thinking and forms of redress against social injustice on account of them. This research should, in theory, be rewarded in the new reckonings of HEFCE's impact agenda. But influence, as we showed earlier, has particular forms of demonstration. In order to maximise the gains of influence, research needs to be more targeted than it has hitherto been on circumstances and issues deemed important in government agendas and on targeted user groups; both factors further narrow the focus of what we research and, ultimately, the content of our disciplines – which are research-fed.

Government influence on research agendas has hitherto operated in distant and opaque ways, but the impact agenda makes this connection visible and enforceable, shaping which issues and problems count as legitimate research and which do not. As we suggested earlier, impact was trialled by the UK research councils before it was absorbed into the REF. In addition to establishing at length the likely impact of research in funding applications, applicants are also required to detail "pathways to impact", setting out the strategies researchers will deploy to publicise and otherwise raise the profile of their research with organisations and broader publics. In other words, impact is already the currency of research bids, explicitly embedded in government assessments of significance in disbursing research funding to university researchers. The 2014 REF attaches increased significance to impact in linking it to further sources of funding outside of those offered by the research councils, thus consolidating government influence over university researchers' research agendas.

Some recent examples illuminate the synergies between government and academic research agendas enacted through HEFCE and the research councils. ESRC funding for migration, for example, is narrowly focussed on two things – reducing migration and "integrating" migrants – revealing a limited approach to migration that excludes other avenues and concerns. This particular government agenda is not limited to migration research: it also surfaced in the ESRC "Connected Communities" call for research bids. This is a major programme of significant funding that interjects into research agendas what we might badge as successive governments' social cohesion policies. Without declaring as much, these initiatives address successive governments' concern with racialized social tension, urban unrest erupting occasionally into urban disturbances – "riots" – and an exaggerated anxiety about the parallel social worlds that result from "ethnic enclaves" in UK cities. Successful bids to these research programmes must be written with these political concerns in mind. We are not suggesting that these are improper concerns or areas of research, but they are narrowly conceived and exclude other approaches, framings and questions, establishing official versions of research significance while broadly dictating the terms on which research problems can be tackled.

This limits researchers to particular types of research and particular frameworks, both of which have consequences in enabling further research and in structuring the empirical and conceptual development of our disciplines. When research from the programmes noted above is reported at conferences and in publications, concepts like "connection" and its efficacy in building "communities" are rarely called into question. Critical work on either of these concepts is unlikely to result from these programmes, not least because, in accepting government funding and with an eye to follow-on resources, researchers buy in to certain preconceptions, thus limiting the exercise of critical capacities through self-censorship. "Integration" and "cohesion" are rarely challenged as the lexicon and conceptual framing for migration: taking government money carries invisible commitments that have consequences not just for what we research, but also for the very formulation of our research-fed disciplines.

These circumstances bear a striking resemblance to those of the Chinese academy, which is often criticised for a lack of academic freedom. The Chinese Academy of Social Sciences (CASS) in Beijing employs large numbers of sociologists and demographers researching rural-to-urban migration and middle class tastes. The government's focus on integrating rural migrants into China's ever expanding cities in ways that avoid civil unrest, and the shift from export-led growth to growth through middle class consumption, are explicitly enacted in CASS's research agendas. While the People's Republic directs its research agendas openly, there is little between this and the system in the UK apart from the rhetoric of "academic freedom", the basis of which is fast being eroded, as Vered Amit (2000) points out in relation to Canada. In the UK, HEFCE's impact agenda consolidates through new strategies the government's hold over university researchers' research agendas.

The irony of this consolidation is that the UK government rarely takes an interest in the results of the research it has – albeit indirectly, through the research councils – commissioned. In the UK we are seeing the era of evidence-free social policy, despite protestations to the contrary. The Coalition Government's pronouncements on social mobility, for example, are oblivious to what this might be, how to measure it, or how to implement changes to increase it. And yet it has funded sociologists to investigate these things. Similarly, recent ministerial pronouncements on the causes and cultures of poverty and third generation "scroungers" – unproblematically aggregated to compose an urban underclass, justifying cuts in welfare budgets – are untroubled by the research evidence suggesting there is no such thing as an underclass. The dead hand of government in anthropological and sociological research could have a positive side in dislodging misplaced popular perceptions. But it doesn't, because governments don't care what we find out, despite having commissioned it. It is tempting to conclude that successive governments are completely cynical about the evidence uncovered by social researchers, especially when it doesn't validate what they believe to be true. It is also tempting to conclude that the influence of governments, deepened by the impact agenda, is actually about controlling the production of knowledge itself and the research activities of those of us who produce it, rather than about using research evidence to finesse social and political policies.

Research procedures

We have shown above that the impact agenda will have consequences for the ways in which we formulate sociological and anthropological research plans and agendas. We also think it will impact on the ways in which we conduct our research. The requirements of influence settle on user groups in HEFCE's formulation of impact, so that academic researchers' connections with user groups are the place where claims of influence are both made good and monitored. We might expect, therefore, that user groups assume new significance in both the planning and the execution of sociological and anthropological research. This means negotiating new research partnerships with non-academic users of our research, thus reconfiguring the social relations of research in addition to making its contents and outcomes user-friendly. This carries obvious dangers in focussing on established and amenable connections and partnerships, and perhaps safe, well-worn issues and problems, at the expense of new and more challenging ones that may not yield the desired auditable influence. Targeted user groups are perhaps the easiest way to demonstrate auditable impact, and this implicitly encourages "grooming" by academics of favoured or easy-to-work-with groups and organisations with particular concerns, which come to constitute a soft infrastructure that can be called on in successive research projects. This soft infrastructure of verification encourages easier and more accessible stories of academic influence. Inevitably there is a danger here of focussing on the documentable appearance of an issue and its designated audiences/beneficiaries/targets rather than its substance. There is a danger that the availability of a connection and a better story takes priority over the importance of an issue with a complicated structure in which impact success is not ensured or amenable to documentation in the terms demanded. These circumstances will either further limit what we research through how we can do it and document the desired impacts on academic-friendly users; or they will further proliferate the tick-box cynicism endemic in audit cultures. Neither is desirable and both compromise integrity; but perhaps it is now too late to worry about something we incrementally gave up long ago.

Competition

Research impact is the new currency of competition, deepening the myriad forms of metric assemblages outlined above, co-constituting universities and researchers. Through the forms of competition already enacted in the publication-based REF we can gauge the likely effects of deepening competition through impact on working practices and institutional life at a micro-level. Obviously impact provides universities with another way of ranking researchers, one that reaches beyond publication, enabling further comparisons within and between researchers, departments and universities. Thus impact proffers new ways of comparing ourselves to our colleagues. It provides new measures of our value through influence that will be reflected, as we know from the experience of the publication-based REF, in promotion and career progression, salaries, research leave and current and future employability: already fraught competitive arenas in universities. New job in/securities potentially follow, with researchers facing accelerated or stalled careers, widening inequalities and, at worst, un-employability, as universities react by competing for talent-with-impact or (worse) imagined impact potential when hiring, promoting and so on. There is already an academic version of what football managers think of as a transfer market in relation to publication – impact can only extend these forms of competition. Depending on how it is managed, this new strand of commensurability, unleashing new forms of competition, will deepen existing reconfigurations of everyday academic life and the social relationships of work in friction, envy, despair, anxiety, withdrawal and stress, with potentially over-rated successes and inflated egos among the winners – recalibrating structures of feeling to encompass ever new domains of success and failure. In short, impact establishes new ways of assessing individual and collective value in a deepening competitive environment, with important consequences for the quality of the daily working environment as well as its resourcing.

New visibilities

One final, and highly significant, dimension of the new reckonings – visibility – is clearly flagged by HEFCE in insisting that universities make visible the results and value of academic research as public goods. We think that these new demands on visibility have the potential to shift academic cultures onto new ground – creating new cultures of visibility – with some far-reaching consequences. Visibility works across two interconnected surfaces: in universities competing with each other for students and research resources; and among academic researchers competing with each other for a whole range of things exposed in metric assemblages and audit cultures, including impact. These visibilities deepen what Amit (2000: 218) describes as the "panopticization of the university". The regimes of panoptical visibility Amit so ably describes have taken a new turn with the heightened emphasis on visibility enacted in the impact agenda.

The public performance of academic labour has enhanced significance in the new regimes of visibility. This explains the exclusion of teaching from impact and the motivation to bring new measures to bear upon publications. It is no longer adequate to write scholarly books and articles or to try out the thinking in them on our students: these activities are not visible enough, or perhaps they are visible to the wrong people – other academics and students – raising important questions about what counts as visibility, and to whom. Thus the high-vis academic must pitch research into public domains where competing visibilities jostle for attention. Influence in places where it is hard to achieve is what counts most, and while the impact agenda does not explicitly counsel cultivating extensive media attention, this is clearly an arena with high gains in developing general levels of visibility. In this context the high-vis academic tweets, blogs or otherwise makes visible every thought and activity in the new domains in which value is judged. Departmental websites, Twitter accounts and blogs "buzz" with our activities. It is not enough to do what we do: we must further perform the results of our labours in ways that can be seen by ever-new audiences. It is not what we do that matters but what we are seen to do by those who count or who can be counted. Thus impact seeps beyond its objects of intervention and measurement and constitutes the very habitus in which we operate and in which what we do is made to count in new ways. Celebrity is no longer the domain of movie, sports and rock stars: the celebrity academic – until recently confined to historians and scientists – is the creation of the impact agenda. The logics of impact provide new opportunities for the visible performance of intellectual labour, and this is how we will be judged in future.

The high-profile performance of academic labour implicitly compresses claims to excellence that may or may not be entirely justified or necessary. Here we see convergence between visibility and HEFCE's calibration of impact as having to demonstrate two-star capacity, meaning international standing and above. Making claims to excellence – concomitant with high visibility and, even, celebrity – in order to satisfy audit procedures potentially undermines the credibility of UK universities and academics, which have hitherto had a rather good reputation for generally decent standards of education and scholarship. HEFCE's enactment of impact explicitly encourages inflated claims and misplaced grandeur. Thus, in order to be successful, sociologists' and anthropologists' impact narratives will inevitably be drawn into this trap, which poses three key problems for our disciplines.

Firstly, it replicates the kinds of narratives and impression management many of us have spent our professional lives debunking, offering instead more considered portraits. Secondly, it implicitly asserts the superiority of the UK over other national academies, which, as anthropologists are painfully aware, given the origins of the discipline in colonial administration, may not be the best pose for the academies of an ex-imperial power to strike. Thirdly, it generates a context in which (almost) no one (celebrities being the exception, raising questions about how celebrity is generated) is ever good enough. In a world where only world-leading excellence counts for anything there is clearly no point in simply being rather good at what you do. Therein lies failure – the failure of us all to measure up.

Two scenarios arise from this. The first is that universities have problems recruiting academics who are "good enough". We have all heard the stories from, and some of us have firsthand experience of, stalled recruitment processes that come to the conclusion that no one is good enough to appoint. In the distorted logics of excellence most of us are failures. The second scenario is that, as no one – possibly not even university administrators – actually believes these claims to stratospheric significance, they are widely known to be empty rhetoric with no substance. In this scenario, impact's new instantiation of intellectual nationalism convinces no one and makes UK academics look inappropriately self-important at international conferences. Both scenarios coexist, we think.

New visibilities – distended in ways we have still to discover – augment the repertoires of universities, which log the activities of high-vis academics in building what are interconnected subject and institutional profiles. The visibility and, by extension, popularity of competing disciplines to potential students has new resonance in the UK as state funding for all but science, technology, engineering and medicine (the STEM subjects) is placed under strict new limitations, effectively privatising sociology and anthropology and deepening the enactment of markets in what was formerly public provision. Thus UK universities are no longer really public institutions in the sense that they are in France or Germany, but have been privatised, while the government regulates both student numbers and the fees universities can charge: a quasi-market. The social sciences, arts and humanities are thus funded by students' fees, which have been increased nationwide to £9,000 a year – a threefold increase on the former rate, which was itself a threefold increase on the rate before that. Fees are treated as student loans – just privatized – to be repaid throughout the student's working life to financial institutions charging rates of interest (RPI + 3% currently) in excess of those normally available to borrowers.
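As a back-of-envelope sketch of what this means for fee debt alone: only the £9,000 annual fee and the RPI + 3% interest margin come from the text; the 2.5 per cent RPI figure and the three-year degree are our illustrative assumptions.

```python
# Rough sketch of how tuition-fee debt compounds at RPI + 3% over a
# three-year degree. The RPI value is an assumption for illustration.

FEE = 9_000          # annual tuition fee (GBP), from the text
RPI = 0.025          # assumed retail price inflation
RATE = RPI + 0.03    # interest margin described in the text

balance = 0.0
for _ in range(3):   # three years of fees, each accruing interest
    balance = (balance + FEE) * (1 + RATE)

print(f"Fee debt on graduation: about £{balance:,.0f}")
# -> roughly £30,000 of fee debt alone, before any maintenance loan
```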

The impact of this on student recruitment is yet to be seen, and with it the viability of sociology and anthropology in these new quasi-markets in UK universities. It is possible that student demand – structured in a popular rhetoric of employability and reward versus education costs – will lead to a serious contraction of our disciplines. Maintaining our viability through student fees depends on students' willingness to incur the kinds of debt necessary to finance their sociology and anthropology education in a context of declining employment possibilities. Thus governments have transferred the funding of the arts and humanities onto the next generation, set, in the current economic climate, to become, as elsewhere in Europe, a precariat of graduates with no obvious mechanism for alleviating their indebtedness. It remains to be seen what the effect of student demand will be in subjects like ours – subjects that may experience difficulty in being visible, that may have difficulty in making their public impact felt in ways that count and can be counted, and that have been judged less important than subjects attracting government funding. It will be important to monitor the closing and consolidation of sociology and anthropology departments. Visibility is a pernicious tool and it changes everything.

Conclusions

In deepening the metric moment, HEFCE's rendering of impact raises serious questions about the content and conduct of research and, ultimately, the intellectual configuration of our disciplines. These questions concern the character and organization of collaboration between researchers and broader publics, groups and organizations: what counts as collaboration, and how this is enacted in how we conduct research and publicize new data. What follows from these new arrangements in the ways in which we do our jobs, in the new cultures of visibility and the deepening demands of excellence? How are the logics of regimes of visibility actually constituted in practice, what counts as visibility, and to whom?

It is possible that we have reached a point where the metric assemblages of the current conjuncture of data-driven governance are of such a density and "sophistication" that they take us "beyond the audit culture": towards a different hegemonic project where systems of "quantified control" begin to possess their own specificity beyond mere auditing procedures; where there develops an ability not just to mimic, but to enact, competitive market processes. Impact has tipped us over this point, providing new arenas for the enactment of competition and commensurability which have far-reaching implications in shaping what we might research, how we conduct ourselves in the process, and, thus, how we add new knowledge to sociology and anthropology. In the value placed on visibility rather than substance, impact and the attendant requirements of visibility enact new forms of competition with pernicious consequences for daily working practices and the social relationships constituting collegiality. New regimes of visibility reconfigure both academic labour and the content and conduct of our disciplines, shaping their very survival in the newly extended competitive arenas of market-driven governance.


Bibliography

AMIT, Vered, 2000, “The University as panopticon: moral claims and reckonings on academic freedom”, in Marilyn Strathern (ed.), Audit Cultures: Anthropological Approaches to Accountability in Academic Practice and Beyond. London and New York, Routledge, 215-235.

BENCE, Valerie, and Charles OPPENHEIM, 2005, “The evolution of the UK’s research assessment exercise: publications, performance and perceptions”, Journal of Educational Administration and History, 37 (2): 137-155.

BURROWS, Roger, 2012, “Living with the h-Index? Metric assemblages in the contemporary academy”, Sociological Review, 60 (2): 355-372.

DE ANGELIS, Massimo, and David HARVIE, 2009, “ ‘Cognitive capitalism’ and the rat-race: how capital measures immaterial labour in British universities”, Historical Materialism, 17 (3): 3-30.

DUNLEAVY, Patrick, 2012, "REF advice note 1: understanding HEFCE's definition of impact", LSE Blog, available at <http://blogs.lse.ac.uk/impactofsocialsciences/2012/10/22/dunleavy-ref-advice-1/> (last accessed May 2014).

FOUCAULT, Michel, 2008, The Birth of Biopolitics: Lectures at the Collège de France 1978-79. Basingstoke, Palgrave.

HEFCE, 2010, Research Excellence Framework Impact Pilot Exercises: Findings of the Expert Panels, November 11, available at <http://www.ref.ac.uk/pubs/refimpactpilotexercisefindingsoftheexpertpanels/> (last accessed May 2014).

HICKS, Diana, 2009, “Evolving regimes of multi-university research evaluation”, Higher Education, 57: 393-404.

HOLMWOOD, John, 2011, "TRACked and FECked: how audits undermine the arts, humanities and social sciences", Exquisite Life: Research Blogs, available at <http://exquisitelife.researchresearch.com/exquisite_life/2011/03/tracked-and-fecked-how-audits-undermine-the-arts-humanities-and-social-sciences.html> (last accessed May 2014).

HOWIE, Gillian, 2005, “Universities in the UK: drowning by numbers”, Critical Quarterly, 47: 1-10.

JOHNES, Jill, Jim TAYLOR, and Brian FRANCIS, 1993, “The research performance of UK universities: a statistical analysis of the results of the 1989 research selectivity exercise”, Journal of the Royal Statistical Society, Series A (Statistics in Society), 156 (2): 271-286.

KELLY, Aidan, and Roger BURROWS, 2012, "Measuring the value of sociology? Some notes on the performative metricisation of the contemporary academy", in Lisa Adkins and Celia Lury (eds.), Measure and Value: A Sociological Review. Oxford, Wiley-Blackwell, 130-150.

LOCK, Grahame, and Herminio MARTINS, 2011, "Quantified control and the mass production of 'psychotic citizens'", EspacesTemps.net, available at <http://espacestemps.net/document8555.html> (last accessed May 2014).

MONASTERSKY, Richard, 2005, "The number that's devouring science", The Chronicle of Higher Education, 52 (8), available at <http://chronicle.com/article/The-Number-That-s-Devouring/26481> (last accessed May 2014).

SAETNAN, Ann Rudinow, Heidi Mork LOMELL, and Svein HAMMER (eds.), 2011, The Mutual Construction of Statistics and Society. London, Routledge.

SHORE, Chris, and Susan WRIGHT, 2000, “Coercive accountability: the rise of audit culture in higher education”, in Marilyn Strathern (ed.), Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy. London and New York, Routledge, 57-89.

STRATHERN, Marilyn, 2000, “Introduction, new accountabilities: anthropological studies in audit, ethics and the academy”, in Marilyn Strathern (ed.), Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy. London and New York, Routledge, 1-18.


How to cite this article

Print reference

Caroline Knowles and Roger Burrows, "The impact of impact", Etnográfica, vol. 18 (2) | 2014, 237-254.

Electronic reference

Caroline Knowles and Roger Burrows, "The impact of impact", Etnográfica [Online], vol. 18 (2) | 2014, online since 09 July 2014, accessed 16 April 2024. URL: http://journals.openedition.org/etnografica/3652; DOI: https://doi.org/10.4000/etnografica.3652


Authors

Caroline Knowles

Department of Sociology, Goldsmiths, University of London, UK
c.knowles@gold.ac.uk

Roger Burrows

Department of Sociology, Goldsmiths, University of London, UK
r.burrows@gold.ac.uk


Copyright

CC-BY-NC-4.0

The text alone may be used under the CC BY-NC 4.0 licence. All other elements (illustrations, imported supplementary files) are "All rights reserved", unless otherwise stated.
