- * We thank Steve Brown for proofreading the English texts, as well as Jacques Delaforge for (...)
The papers in this volume reflect a wide range of ways in which measurement is implicated in education, its systems, its processes and its accountabilities. These to a large extent reflect both existing and historical constructions and uses of measurement and how it represents and affects both the practices of education and our understanding of them. In this paper I will try to draw out some broader consequences of the increasing use of indicators in education. I shall do this by looking at the place of indicators (a term I shall use generically in this paper, to apply to all elements that involve the quantification of qualitative issues, such as benchmarking, etc.) in the EU’s Open Method of Coordination (OMC), which is the key vehicle for the implementation of the Lisbon objectives; these set the target for “Europe” to become “the most dynamic and competitive knowledge-based economy in the world, capable of sustainable growth, more and better jobs and greater social cohesion”. Education is given a crucial role in the achievement of these objectives, which, it is stated, can only be met at Community and not national level. I shall argue that this process entails, especially through the medium of the OMC, the construction of a distinct European Education Space and that the extended use of indicators is crucial to this construction. The paper will proceed as follows. First, I will briefly rehearse arguments around the changing political terrain in which new uses of indicators have emerged. I shall then outline the basic nature of the OMC and discuss some issues its development has exposed. I shall then offer two brief examples of its use in education and finish the paper with a short discussion of how indicators achieve the effects we perceive in education.
It is important to recognize that the OMC may be seen as the latest stage (and probably not the culmination) of a series of significant changes in the focus, scope and purpose of social indicators over the last decade or so, changes that reflect, but have also assisted in disembedding and promoting the shift away from, the “traditional” European social model/welfare state. Such changes in the nature and use of quantitative indicators of social and economic status and change are not, of course, new, as many of the contributions to this volume make clear. Indeed, the development of social statistics is intimately linked with the development not only of social science but of the modern state itself. However, what we may now observe are shifts in the nature of indicators, associated with changes in the nature of the state, the forms of social science and the nature of their relationship (this is most clearly illustrated in the effective mandating of quantitative methods, and repudiation of qualitative methods, in federally funded educational research in the US).
We suggest that there have been two especially significant changes over this period. One is a change in the nature of the state, which may be summed up most succinctly as a shift from government to governance (Dale, 1997; Jessop, 2002). What this points to is that the national state is no longer the only body involved in governing, or solely responsible for the performance and coordination of all the activities of governing. This is reflected in the prevalence of the discourses of New Public Management, most effectively summarized in the motto “steering not rowing”, which assumes that the state has been involved in things it should not be involved in, and has been doing ineffectively and inefficiently the things that it has been doing. The other major feature of the changes to governing over the past decade and a half has been a shift in the scale of some activities traditionally carried out at national level. This reflects changes in the scale at which economic activities are carried out, and attempts to respond to them by setting up forms of governance at similarly supranational scales. The EU is clearly the most prominent and fully developed of these responses.
Thus, very briefly, what has been entailed in terms of indicators has been a shift away from relatively indirect indicators of national well-being, based largely on levels of employment as a proxy for “welfare”, from which it was assumed that “social cohesion” would follow. These indicators were associated with the view that the government’s job was to keep the maximum number of people in employment, with “dole” payments related to employment assumptions and approximating them as far as possible. That is, the aim was “welfare”. The key point about the kinds of indicators used here is that they reflected the past or historic status of the national society, indirectly, through the proxy of employment.
More recently, indicators have come to represent targets for, rather than reflections of, the “actual” state of well-being in a society. This shift from the use of indicators to inform policy to their use to direct policy, and from cognitive to normative uses of them, is evident in a number of areas, and in education it is especially associated with the growth in the use of comparison as a basis for competition (see, for instance, the changes in the basis of UNESCO statistics (Cussó, forthcoming) and the OECD’s PISA (Programme for International Student Assessment; see Steiner-Khamsi, 2003)). We might best clarify the point here by means of a table comparing indicators over the Welfare State and post-Welfare State periods. The table briefly summarises the change in the focus, scope and purpose of social indicators.
TABLE 1: Changes in types of Social Indicators
FROM | TO
Reflection of national social cohesion | Target for and/or benchmark/indicator of international comparison
Past-oriented | Future-oriented
Providing report on existing situation | Providing framework for future policy directions
Relatively indirect; e.g., employment levels as proxy for social cohesion | Direct comparative base for international measures of social cohesion
Unemployment the key issue | Social inclusion/exclusion the key issue
Solution: political measures against unemployment | Solution: separate/ad hoc responses to “causes” identified in multiple “direct” indicators
National Census the model | Eurostat/Eurobarometer/Eurydice the model
These shifts have certainly not passed education by. As has been very widely remarked (see, e.g., Strathern, 2000), the “audit culture” has deeply penetrated the higher education sector in many countries, but equally important has been the increasingly self-conscious “tailoring” of indicators to achieve particular ends and to bring about particular kinds of changes at all levels of education systems. This kind of shift, which goes well beyond the use of indicators as accountability mechanisms, is typified by the OECD PISA project, with its explicit aim of changing the basis and focus of school education from knowledge to competence, from a focus on the content of what is learned to one on the ability to apply it in “everyday situations”.
However, perhaps the most ambitious and extensive programme of indicators to be initiated so far is that set in motion by the EU to ensure the delivery of the Lisbon objective for “Europe” to « become the most dynamic, competitive, knowledge-based economy in the world, capable of sustained growth with more and better jobs and greater social cohesion ». The chosen means of achieving these objectives in the field of education, as in many others, is the Open Method of Coordination, which relies almost exclusively on the use of indicators and benchmarks. Indeed, it may be argued that the OMC would not have existed or been thought possible without the decade-long development of indicators across the education field. It was in a sense enabled by them and represents a significant extension of their use. Most important in our current context, it represents possibly the furthest-reaching and most expansive attempt to use indicators, both in terms of its scale, which covers the whole membership of the EU and seeks to subject it to a single measurement regime, and in terms of its scope, which is not limited to comparing the Member States and making them commensurable, but also includes the development of common policies and benchmarks as well as monitoring and audit. In terms of the Table above, we might say it represents a shift from the production of statistical accounts of individual nation state histories to the setting out of the basis of a collective European future. It is, though, crucial to note that neither here nor at any point in this paper is this intended to suggest a zero-sum game between “national” and “European” levels. On the contrary, the argument I will advance will stress the continuing importance of both, and a central focus will be the nature of the relationship between the scales as it is framed by the OMC.
As stated in the Bulletin on the Conclusions of the Portuguese presidency (European Commission, 2000), ‘the open method of coordination, which is designed to help the Member States to progressively develop their own policies, involves:
– fixing guidelines for the Union combined with specific timetables for achieving the goals which they set in the short, medium and long terms;
– establishing, where appropriate, quantitative and qualitative indicators and benchmarks against the best in the world and tailored to the needs of different Member States and sectors as a means of comparing best practice;
– translating these European guidelines into national and regional policies by setting specific targets and adopting measures, taking into account national and regional differences;
– periodic monitoring, evaluation and peer review as mutual learning processes’ (emphases added) (European Presidency, 2000: para. 37).
The key arguments for the designation of the OMC as the vehicle for the implementation of the Lisbon agenda might be seen in its perceived ability to get around five challenges that effectively ruled out the use of the Community method, or harmonization. These challenges are:
– the challenge of « politically sensitive areas » (see de la Porte, 2002), where the use of the existing method would be impossible (and where « the OMC has rapidly become a virtual template for EU policy making » (Zeitlin, 2002)). These include pensions, social inclusion and employment, as well as education;
– these are also areas where national interests are very strong, and where “Europe” would be resisted powerfully if it sought to intervene in the traditional ways. As Jacobsson (2001: 2) puts it, « the European integration project has now reached a phase where the core areas of the welfare state are directly affected… In these areas supranational decision making has not met support »;
– the challenge of areas where there is no Treaty mandate for European level action, but where coordinated action is seen to be required. As the former Director General of Employment in the European Commission puts it, « the introduction in the Maastricht Treaty of the subsidiarity principle in combination with the traditional Community method of legislation left the EU to manage an unbalanced agenda, an agenda focused on the promotion of change in the form of a single Market and a Single Currency, but very little room for initiatives to manage change in a socially responsible way » (Larsson, 2002: 16);
– the challenge of the diversity of welfare state models; it is recognized that it would not be possible to devise a single model that would be acceptable to all of the welfare state “families” found in the EU (see Scharpf, 2002);
– combining all these, the challenge of creating a “social Europe”: to find means of market-taming – or at least accommodation – to match the market-making actions that have dominated the EU to date, as well as responding to the “productive social policy” elements of the Lisbon agenda, which has been seen as representing « a true watershed in the Europeanisation of employment and social policy » (Esping-Andersen et al., 2002), a « turning point » in European social policy (Ferrera et al., 2002) and a « Maastricht for Welfare » (Rhodes, 2000).
It is important to recognize that, in making the OMC the appropriate means through which to secure the Lisbon objectives, the Portuguese presidency regarded it as more than a merely technical device (see Rodrigues, 2002); its introduction also assumes and reinforces the ideas (a) that “Europe” is an entity that has authority over these areas; (b) that it is able to act effectively in the absence of the existing “Community method” (which « creates uniform rules that member states must adopt, provides sanctions if they fail to do so, and allows challenges for non-compliance to be brought in court » (Trubek & Mosher, 2003)); and (c) that in doing so it can “act on itself”.
Indeed, Claudio Radaelli sees the OMC as eminently a « legitimising discourse »:
that provides a community of policy makers with a common vocabulary and a legitimising project… with the result that practices that... a few years ago would have been simply labeled « soft law », new policy instruments and benchmarking are now presented as « applications » if not « prototypes » of « the method » (Radaelli, 2003: 7).
Overall, then, we may agree with the felicitous representation of the OMC as signaling a move from « integration by law to Europeanisation through figures » (Bruno et al., 2004).
There is now a voluminous literature on the OMC (in both French and English; there are, for instance, around 90 papers in the main database held at the European Center at the University of Wisconsin, and my searches suggest maybe twice as many elsewhere). However, it would be neither logistically possible nor appropriate, in terms of the aims of this paper, to attempt to survey it comprehensively. Instead, I will focus on two issues of particular relevance here. The first involves distinguishing three broad generic types of purpose for indicators within the OMC and the contribution of education systems to the achievement of the Lisbon objectives.
- 1 Lukes’ first dimension of power concerns decision-making and which party prevails at that (relative (...)
The first type of purpose is “Europeanisation”. This is a contested term, but there appears to be considerable agreement in the literature that “Europeanisation”, in its commonest and most orthodox sense, involves a focus on the effect of European level processes, regulations, treaties, etc. on the domestic political structures of Member States (MS). This would entail judging the success of the OMC by the degree to which it actually achieves the kinds of policy goals that it was introduced to make possible, and consequently evaluating it on the basis of a means-ends logic of rationality. However, it is likely that this will be a relatively rare purpose for the use of OMC-related indicators, for the very reason that it is essentially associated with the kinds of strategies that the OMC was introduced to replace. Thus, it will not be assumed in this paper that the success of the OMC is to be judged – measured – on the basis of distinguishable “effects” on MS domestic policies. The construction of a European Education Space, for instance, may be at least as important in setting agendas, shaping the rules of the game and forming preferences as it is in changing decisions (to apply Lukes’ (1974) distinctions between the dimensions of power1).
Second, if we look at the statements of purpose for the OMC, the clearest generic purpose appears to be “Convergence” of education policies across the Member States (and possibly beyond, for it is an interesting feature of the EU’s programme in education that many parts of it are made accessible to non-members; this remains the case even after the latest expansion). Furthermore, this purpose is to be achieved without lessening diversity of practice. However, this still leaves open a rather wide range of possible meanings; does it imply convergence of policies, of processes, of outcomes, or of all three, for instance?
The third and final set of purposes we will consider is the least precise and concrete of the three, but in many ways the most significant. We will call this set of purposes « Changing the Scale of Educational Governance ». In essence, these purposes serve to identify “Europe”, in the terms we used above, as an entity with authority, that can act in the absence of the traditional Community method, and that can act on itself. We see this purpose expressed most clearly in the insistence that the Lisbon objectives in education can only be met at the European level, and in the continuing emphasis in Commission documents on the crucial importance of working at the European level. Here, we may see the focus, and implicitly the purpose, of the use of indicators shift from the production of concrete and measurable outcomes to the installation of sets of practices and processes aimed at enabling changes in the functional and scalar division of the labour of educational governance, in ways that make “Europe” a much more active and significant actor in the area of education.
The second point to be made is that the OMC processes seem likely to have a “depoliticising” effect. In addition to the national tactical (“blame-shifting”) advantages that the OMC may foster, more fundamentally it turns policy decisions into “technical” matters for long-term negotiation between “de-nationalised”/supranational experts, rather than national preferences that have to be defended nationally. In this way it displaces immediate problems, both temporally and spatially; it extends the time horizon over which they are to be addressed and removes the locus of decision to another place. This also makes education policy making at the EU level a matter of technical problem solving between stakeholders within the system, rather than the result of the political resolution of political conflicts between different interests. Further, the process will tend to converge around the economic interests of the already strong, rather than around their own or anyone else’s political priorities. Finally, the OMC will tend to operate on the basis of proscription rather than prescription; that is to say, it will tend to patrol the boundaries of the possible rather than prescribing precisely what the territory thus bounded should contain.
So, indicators and benchmarks may be seen as simple, transparent, flexible, non-directive, quantitative mechanisms that allow for considerable latitude in involvement and interpretation – all of which make them politically very attractive mechanisms of rule, nationally or supranationally. It will be useful, therefore, to examine the nature and uses of indicators and benchmarks a little more closely and more critically than is typically the case.
I will now briefly illustrate how these developments work in practice. Two sets of examples are used. One consists of a consideration of some features of the Working Document developed to guide the implementation of the goals for Education and Training following the Lisbon agenda (Commission of the European Union, 2003). The other draws on an empirical account of how “best practice” is “discovered” and determined in the consultation processes generated by the OMC.
As its title suggests, the purpose of the Working Document is to give some substance to the ways in which education is to contribute to the achievement of the Lisbon goals. It is, thus, a most appropriate place to seek examples of how the use of La mesure en éducation as part of the OMC brings about the kinds of changes that I have been suggesting it will.
The first point to be made concerns the emphasis laid in the opening paragraphs of the document on the nesting of the Detailed Work Programme (DWP) within the overall goals of Lifelong Learning, and on the need to develop, in and through education, the new competences required for the Knowledge Economy called for at Lisbon. Neither of these has traditionally had a central position in the education systems of MS, and together they immediately reflect an intention to form a new agenda as well as, or rather than, adjusting existing agendas in directions implied by Lisbon.
Beyond this indication of intent, we encounter some well known problems with indicators, such as the tendency to simplify and distort “reality” in making it quantifiable, and the consequent recognition that, despite claims to the contrary, indicators are not and cannot be “neutral”. Thus the choice and specification of indicators is always a crucial and political act, one that gives great power to the body controlling them – in this case, effectively the European Commission.
However, as well as evidence of these generic problems with indicators, we find examples of indicators being used to extend the scope of the agenda itself, something that may be seen as “agenda amplification”. There are at least five clear examples of this in the document:
– the discussion of the use of “best practice” in the area of the new competences; it is difficult to see how there might be examples of best practice in a new area, which makes entirely plausible the conclusion that in this case what is to count as best practice is being newly constructed, as a new and specific goal, rather than representing the optimum combination of existing practices;
– while the issue of what might constitute competences is not addressed directly, it is introduced in a sense through the back door, with the use of data from the OECD’s Programme for International Student Assessment (PISA) as the basis of comparison between Member States’ performance in mathematics, science and literacy. As was pointed out above, PISA compares not traditional “achievement” in these areas (such as was the focus of the IEA studies of comparative attainment) but quite explicitly emphasizes and tests students’ competence. Indeed, it is this that has underlain the extent and depth of individual countries’ reactions to PISA (see, e.g., Steiner-Khamsi, 2003; Allmendinger & Leibfried, 2002);
– the introduction of the need for “entrepreneurship education” across the board, without any basis of justification intrinsic to the OMC itself;
– the introduction in the discussion of the goal of “making better use of resources” of the need to extend the contribution of the private sector, another clear attempt to install policy changes through an apparently neutral set of indicators;
– the effective construction of a set of indicators in areas for which no best practice existed, again effectively installing not just a set of indicators but also outlining a policy framework for that area. The best example of this is the area of ICT and education.
It is clear that much of the work of constructing indicators (and hence of defining areas and policy) takes place in the committees of experts charged with that responsibility, and we are fortunate that there exists a quite detailed ethnographic account of the processes through which what is to count as “best practice” in the area of ICT and education is determined (Tsatsaroni & Zografou, 2004). Significantly, in terms of the arguments made above, Tsatsaroni and Zografou point out that this response is built around the principle of Lifelong Learning and that it puts the Open Method of Coordination and “learning from one another” at the heart of that contribution, on the principle that « the changes and reforms defined nationally become more effective when they draw on the factors which determined this success » (Tsatsaroni & Zografou, 4, quoted from the Commission Staff Working Document on the implementation of “Education and Training 2010”). Tsatsaroni and Zografou’s paper is based on a close reading of key documents produced as part of that response, and on the basis of this they suggest that « while different conceptions of “practice”, drawn from diverse resources, traditions and discourses, are being mobilised in this political programme, its meaning has been stabilised in a common ground that connects the more obvious uses and explanations of practice to its scientific definitions in the ICT and Education research field » (2004: 2).
Tsatsaroni and Zografou describe in considerable detail the processes through which the different working groups operate around the Work Programme; this level of detail is too great to address adequately here, but it does enable us to point to some very significant insights. A major focus is on the EC’s coordination of the process and its use of external consultants, and much of what Tsatsaroni and Zografou have to say echoes the findings of other similar studies of the OMC, for instance in its “disruption” of national assumptions and schedules (see, e.g., Jacobsson, 2002). In this case they emphasise the importance of the Annual Reports to be provided by the working groups, and point out that though the eight areas covered by the different working groups in Education and Training 2010 differ significantly in their coverage, « what makes these reports similar (…) is that they are, first, encouraged and expected to relate horizontally both at the national and the European level ». What was important for the Commission, they suggest, is that « the OMC and the resulting reports were/are the means through which a “platform of cooperation” has been established between the EU and its member Nation States in the area of education and training as well as beyond it » (8, emphasis added). In particular, they found that « the progress reports of the groups are used to: take stock of the situation (…); to allow the broadest possible consensus to emerge between experts and partners in the area of education and training on present and future action (…); and achieve synergy with the already existing programmes and projects that address similar issues to those included in the objectives of the programme » (ibid., emphasis in original).
Tsatsaroni and Zografou go on to suggest that in the documents they worked on, “good practice” is presented as an innovative method defined in contradistinction to other, traditional and perhaps outmoded ideas or tendencies, and that:
« The selection of examples of “best practice” in national policy is guided by a common analysis grid... and evaluation is a determining element for judging the quality of the practice selected. Thus through guiding, underlying, explicating and authorising of the OMC the (Commission Working Document) comes to define:
– what is to be selected, what are proper examples of national policy (re)form(ation);
– the criteria to be applied in the selection;
– the “best policy practices” that will be part of common priorities. »
What all this adds up to in this context is:
– the OMC process leads not to the diffusion and/or transfer of best practice but to the creation of new definitions and roles of best practice that are not reducible to the aggregate of MS practices;
– the precise definition of best practices is less important than their emergence from a common platform;
– one consequence of its role in the development of “good practice” is to demonstrate EC competence in the area of education policy;
– the EC proceeds not through policy transfer or imposition but by consensual, non-binding joint problem identification, one of whose central features is that it always includes “Europe” in the identification. It does not rule out different national problem identifications, or necessarily lead to policies, and it may be backed up by benchmarks;
– what is then involved is a collective identification of common (European) problems (that can only be solved at the European level) – that is, of new problems, or old ones reframed by shifting their scale – rather than policy transfer. It involves the overriding of national differences by common definitions and indicators, rather than accommodation to them. “Europe” here becomes less an external context with the potential to affect national policies and more a common space where MS (under the coordination of the EC) shape and frame not only distinct policies, but, as I have argued elsewhere (Dale, 2004b), a parallel education sector;
– the question then becomes as much how the problem is framed and addressed as how successfully it is solved. Europeanisation framed as “effects on” domestic policy captures some but not all of what is going on (see Dale, 2004a).
While there has been a considerable amount of work on the relationship between indicators and practices, there have been fewer attempts to specify the processes through which changes in practice are brought about, and it may be worth considering the different “theories of change” – of how indicators may lead to changes – contained (usually implicitly) within them (a stimulating discussion of these relationships is provided by Ray Pawson, who refers to theories of change as “programme ontologies” (Pawson, 2002)).
We will point to nine theories, operating at different levels, of how OMC indicators might bring about the desired changes.
– 1. The first is what Claus Offe has called “unlearning”. He argues that successful policy learning through the OMC also involves “unlearning”, and partially demolishing entrenched (national) institutional patterns; and he goes on to argue that « Such “unlearning” may in fact be the main purpose of the OMC, or its hidden curriculum. The main purpose of this method of policymaking seems to be that of bringing home to member states’ political elites and constituencies the need for “modernization” and “recalibration” of their hitherto adopted arrangements of social security, industrial relations and labour market policies » (Offe, 2003: 463).
– 2. The second might be called “process disruption and dislocation”. It involves the imposition of common patterns of practice and common schedules on member states with a range of different traditions in those areas, a process referred to as European “deadlineification” (quoted in Jacobsson, op. cit.). These common schedules may not replace existing schedules and rhythms, but neither do they succumb to them, placing considerable pressure on some MS to alter their procedures to accommodate to the regional patterns.
– 3. The system of common indicators across the EU changes the bases of comparison and calculations of progress of MS and their education systems. As was implied in Table 1, it shifts the basis for calculating progress from a national/vertical/specific/temporal one where progress is measured against what has been achieved in the national system historically, to a transnational/horizontal/collective/spatial one where the basis of comparison becomes much more general and related to a constructed multinational average, benchmark or best practice.
– 4. A fourth way of bringing about attachment to what the indicators entail is what we might call “reputational threat through public exposure”, better known as “naming and shaming” (and sometimes “faming”). This is often taken to be the means through which league tables of all kinds work. The effectiveness of this strategy has been quite widely commented upon (see, e.g., Jacobsson, 2003).
– 5. What might be referred to as “bestowing legitimacy”. This might be done through mimesis, which is associated with ideas of “best practice”. In periods of rapid change, or when confronted by new challenges, for instance, having a model or basis for following a course of action, or policy, may be very persuasive, and this may lead MS, especially perhaps the acceding countries, to welcome provision of such models. Alternatively, as Power has argued, one key element of a process like the OMC entails having to demonstrate a commitment to the process itself; as he puts it, « submission to audit is a benchmark of institutional legitimacy » (Power, 1997: 17).
– 6. Similar in a sense to naming and shaming is what we might call the “treadmill”, or the “pressure for continuous improvement”. We see a very good example of this in practice in “models” whose expectation of periodic, improving iteration is of key importance in how they work.
– 7. Turning indicators into a “Procrustean bed”, or the “elaboration of standards to which conformity is expected”. This may be backed up by the assertion, or implication, famously associated with Mrs Thatcher, of TINA: There Is No Alternative.
– 8. Most explicit in much OMC (and other EU) literature is the method of peer pressure, or peer review. The objectives of the EU’s peer review process have been described as « really a method of integrating and harmonizing policies in order to obtain convergence across countries and ultimately to have a single policy process » (Mr Vinci, Head of the OECD Economics Department, in OECD, 2002). This is especially employed as a means of mutual deterrence and of remedying “backsliding” through the mutual monitoring of shared commitments.
– 9. In discussing methods of “soft governance” – of which the OMC would be a key element – as means of bringing about the « subtle transformation of states » (2003), the Swedish sociologist Kerstin Jacobsson refers to a set of what she calls « discursive regulatory mechanisms » that have « a more subtle impact » (2003: 2-3), and to the form of governance that they together constitute. These mechanisms are: joint language use; common classifications and operationalisations; building a common knowledge base; strategic use of comparisons and evaluations; and the systematic editing and diffusion of knowledge and evaluation results, together with peer pressure and time pressure (2003: 9; a somewhat different formulation appears in 2001: 8ff). However, when she suggests that what these mechanisms add up to is « a systematic system of governance with the potential to transform the practices of member states and thus add to the integration process » (2003: 2; emphasis in original), her argument comes much closer to the idea of changing scales of governance.
In this paper I have focused on the Open Method of Coordination, which I have suggested may be seen as one response to changing patterns and scales of the governance of nation states in Europe. In particular, I have suggested that the OMC may be seen as providing a significant extension of EU authority over, and direction of, MS education systems and practices, and that the mechanisms through which these have been achieved rest essentially on the extended application of indicators. I have argued that this process is not to be judged (at least not wholly) in terms of the calculable effects of the OMC on individual MS education policies and practices, but through the extension of the authority and discretion of the EU into MS education policy making. And I have suggested elsewhere that this might be seen to involve the development of parallel “education” sectors at national and regional levels, which will provide the basis for a functional and scalar division of the labour of educational governance in Europe (see Dale, 2004a; 2004b). Thus, the OMC carries immense substantive and theoretical importance, both because it represents the most ambitious, systematic and far-reaching project of governing through indicators that has yet been undertaken, and because it therefore presents a significant and worthwhile challenge and basis for developing accounts of how indicators (are intended to) work.