
Closer than they look at first glance: A systematic review and a research agenda regarding measurement practices for policy learning

Pierre Squevin, David Aubin, Éric Montpetit and Stéphane Moyson
p. 146-171

Abstract

Learning is a cognitive and social dynamic through which the diverse actors involved in policy processes acquire, translate and disseminate new information and knowledge about public problems and solutions. In turn, they maintain, strengthen or revise their policy beliefs and preferences. Despite conceptual and theoretical developments in recent years, concerns about the measurement of policy learning persist. Based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach, this article reports the results of a systematic review of existing practices for measuring policy learning in public administration and policy research. In addition to operationalizations, data sources, methods of analysis and levels of analysis, we examine how the reviewed articles deal with the processual nature of policy learning. We show that existing measurement practices largely transcend the research streams on policy learning, which extends the argument developed by Dunlop and Radaelli (2018) that policy learning is an analytical framework of the policy process. Based on these results, we argue for more transparent operationalizations, discuss the strengths and weaknesses of direct and indirect measurement approaches, and call for more creativity in designing measurement methods that recognize the multilevel nature of policy learning.



Earlier drafts of this paper were presented at the International Workshops on Public Policy of the International Public Policy Association (University of Pittsburgh, 26-28 June 2018) and at the Annual Conference of the European Group for Public Administration (Université de Lausanne, 5-7 September 2018). Nine policy learning experts also agreed to respond to a short methodological survey: while we take full responsibility for the contents of this article, we are grateful for their useful insights.

Introduction

Policy processes involve policy actors ranging from politicians and public officials to managers of public and private companies, members of interest groups (i.e., stakeholders, lobbyists, users, etc.), academics, consultants and active citizens. Policy actors acquire, translate, and disseminate new information and knowledge flowing from interactions, from experience, and from the accumulation of evidence on policy problems and solutions (Heikkila & Gerlak, 2013). As a result, they revise or strengthen their beliefs and preferences regarding policies over time. This cognitive and social dynamic of belief updating is known as ‘policy learning’ (Dunlop & Radaelli, 2013; Moyson et al., 2017). The purpose of this article is to systematically review existing practices for measuring policy learning in public administration and policy research.

We find that these practices are closer than one would expect in a research field that has been split into different networks of researchers and publications with little dialogue between them (Goyal & Howlett, 2018). Indeed, we observe many commonalities in the measurement of policy learning, which supports the argument developed by Dunlop and Radaelli (2018) that policy learning is an analytical framework of the policy process. Despite this general diagnosis, however, some methodological variation remains, which suggests that greater dialogue between research streams may open up different methodological avenues for future studies on policy learning.

While the conceptual and theoretical developments about policy learning have been considerable in recent years, concerns about the measurement of policy learning persist. Heikkila and Gerlak (2013, p. 502), for example, argue that ‘the starting point of learning research that (they) recommend is through clear operationalization and measurement.’ In their analysis of the literature on learning about environmental policy issues, however, Gerlak et al. (2018) concluded that almost one third (31%) of the reviewed studies used learning only theoretically, i.e., as an assumption to examine governance processes, without developing specific measurement strategies. Dolowitz (2009, p. 319) voiced a similar complaint about the lack of empirical evidence on the actual processes of knowledge transfer and acquisition from one institutional setting to another.

One important pitfall of weak measurement methods is that they lead researchers to assume that learning occurs where it does not. Theoretically, ‘censoring’ learning (Dunlop & Radaelli, 2018) involves incorrectly attributing policy changes to putative belief updates instead of recognizing the effects of other factors (power, context, events, institutions, social interactions, etc.). The inability to distinguish the null hypothesis of non-learning from the alternative hypothesis of learning (Radaelli, 2009) also prevents scholars from clarifying the relations between learning and its outcomes.

Several literature reviews of policy learning research exist. However, they focus either on conceptual and theoretical issues (e.g., Gerlak et al., 2018; Riche et al., 2020) or address methodological questions within one specific stream of policy learning research (e.g., Dolowitz, 2009; Maggetti & Gilardi, 2016), with one exception: based on the brief methodological overview resulting from their bibliometric analysis of policy learning research, Goyal and Howlett (2018, p. 39) have called for more policy learning research relying on comparative designs and on the richness of secondary data.

How is policy learning measured, and how should it be? While dissatisfaction with the measurement of policy learning is widely shared, a comprehensive review of existing measurement practices for policy learning is still lacking. This article fills this gap with a systematic review of 53 peer-reviewed articles, based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach (Moher et al., 2009). This approach combines the strength of a systematic search for relevant articles in the existing literature with the clarity provided by analyzing the research methods of the retrieved articles. This analysis, in turn, allows the identification of patterns, as well as of promising or underexplored avenues for measuring policy learning in future research.

This article is organized into three sections. In the first section, we present the main research streams about policy learning distinguished in the literature, along with the main issues related to the measurement of such a social science concept. In the second section, we describe the methods and scope of the review. In the third section, we examine how the reviewed articles address the issues related to the measurement of policy learning. We conclude with several suggestions for future research.

Policy learning: research streams and measurement

  • 1 In addition, the so-called ‘managerialist’ approaches rely on insights from the political and organ (...)

Two main streams of research about policy learning are typically distinguished (Moyson & Scholten, 2018)1. The first stream includes all approaches that focus on policy interdependencies between institutional settings (or territorial units). Policy transfer is the process through which decisions in one setting are made based on decisions previously made in others (Dolowitz & Marsh, 2000). Policy diffusion research (Marsh & Sharman, 2009) is interested in transfer patterns: it often relies on large-N studies to compare regulations and economic performances across countries in a context of interdependency (e.g., Simmons & Elkins, 2004). Finally, policy transfer and policy diffusion may (e.g., Elkins & Simmons, 2005) or may not (e.g., Radaelli, 2005) result in policy convergence, defined as “any increase in the similarity between one or more characteristics of a certain policy (e.g., policy objectives, policy instruments, policy settings) across a given set of political jurisdictions (supranational institutions, states, regions, local authorities) over a given period of time” (Knill, 2005, p. 5).

Policy learning plays a central role in policy interdependency (Gilardi, 2010; Meseguer, 2004; Volden et al., 2008). Through learning, policy actors process information on policies adopted in other settings in order to achieve preferred outcomes. Information is deemed all the more relevant when a policy is perceived as successful (Böhmelt et al., 2016; Maggetti & Gilardi, 2016). ‘Lesson drawing’ (Rose, 1991), an ideal-typical form of learning, refers to the will, effective cognitive ability, and practical capacity to draw lessons from other institutional settings to meet a given objective (e.g., Gilardi et al., 2009). However, policy transfer, diffusion, and convergence can also result from other mechanisms, such as: economic competition, i.e., the adoption of similar policies (for example, building uniform infrastructures) encouraged by positive spillovers; coercion or obligation, e.g., by a hierarchically superior unit of government or by an international organization; contagion, i.e., the ‘transmission of attributes between units because of contact or proximity’ (Malang et al., 2019, p. 1482); and imitation or social influence, i.e., the ‘adoption of an attribute by a unit due to the perception of popularity of attributes among the other units’ (Malang et al., 2019, p. 1482; Shipan & Volden, 2008; Simmons et al., 2008). To complicate matters further, all these mechanisms of policy interdependency interact with each other (e.g., competition and learning: Böhmelt et al., 2016).

In the second stream of research about policy learning, the ‘social learning’ approaches focus on the cognitive and social mechanisms through which policy actors in one institutional setting manage uncertainty and complex ideas to make policies. According to Heclo (1974), ‘politics finds its sources not only in power but also in uncertainty – men collectively wondering what to do […]. Policy making is a form of collective puzzlement on society’s behalf; it entails both deciding and knowing […]. Much political interaction has constituted a process of social learning expressed through policy’ (Heclo, 1974, pp. 305-306). Three well-established social learning approaches may be distinguished: ‘epistemic communities’ as put forth by Haas (1992), ‘social learning’ as developed by Hall (1993), and the ‘advocacy coalition framework’ developed by Sabatier and Jenkins-Smith (1993). All of them emphasize the role of ideas in explaining policy change. Much like policy interdependency, policy change within a policy subsystem does not result only from social learning, but also from other factors or mechanisms, such as the political strategies of competing (coalitions of) policy actors, compromises facilitated by ‘policy brokers’ (Ingold & Varone, 2012), or external shocks (e.g., economic crises or natural disasters: Sabatier & Jenkins-Smith, 1993). Policy learning can also ‘fail’ by leading to groupthink, spurious consensus, unstable outcomes or no change at all (Dunlop, 2017).

  • 2 While research ontology and research epistemology could also be considered methodological issues, t (...)

How can policy learning be measured? Measurement typically revolves around several key issues, which are addressed in this article2. First, as far as operationalization is concerned, a concept has a multilevel structure composed of a basic level, secondary-level dimensions, and third-level indicators (Goertz, 2006). Second, diverse data sources may be distinguished, ranging from documentary sources (e.g., reports or newspapers) to human sources (e.g., interviews or surveys). Third, those data can be analyzed through various methods: qualitative, quantitative, or mixed (McNabb, 2017). Fourth, policy learning can be analyzed at various levels: micro analyses are concerned with the individual foundations of policy learning, meso analyses examine its social dimension, and macro analyses focus on aggregation at the organizational, institutional or societal levels (Dunlop & Radaelli, 2017). Finally, policy learning is a mechanism, i.e., a processual concept. Sabatier (1987; 1993) argued that the ‘enlightenment function’ of knowledge (Weiss, 1977), as well as the effect of policy learning on policy change (see Moyson et al., 2017), become effective only after periods of ‘a decade or more’. The measurement of policy learning thus involves, by definition, the analysis of updates (or stability) in beliefs or preferences between t0 and t1.

Methods and scope of the review

The systematic literature review presented in this article draws on the PRISMA approach (Moher et al., 2009). The items included in PRISMA ensure the transparency and replicability of the review process. This approach, initially developed in biomedical research, has been successfully used in other recent systematic reviews in public administration (e.g., Riche et al., 2020). The search strategy, eligibility criteria, study records and variables used in our systematic review of policy learning research are presented in the remainder of this section.

We searched for publications whose title, abstract or keywords contained the words ‘policy’ and ‘learning’, published between 2000 and 2019 and recorded in the Web of Science under the two following subject areas: ‘public administration’ (47 journals) and ‘political science’ (170 journals). From this initial list of 1,273 records, we retained only peer-reviewed articles written in English with at least some focus on policy and/or on learning, based on the title and abstract, which led to a new list of 232 articles. We did not find any duplicates. To assess the final eligibility of the retrieved articles, based on the analysis of their full text, we relied on two criteria. First, we only selected articles that focused on the cognitive and/or social dynamic related to the updating of policy actors’ beliefs or preferences (Dunlop & Radaelli, 2013; Moyson et al., 2017). For example, many of the excluded articles focused on the administration or the policies of educational or lifelong ‘learning’ and examined the ‘policy’ implications of their findings. Second, we only selected empirical studies measuring policy learning. Managerialist studies on organizational learning are not covered in this review. The search strategy, summarized in Figure 1, was implemented by the first author of this article, which means that intercoder reliability is not an issue.

Figure 1: Flow diagram of the search strategy (image not reproduced)

Source: The Authors
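
To illustrate the logic of this screening process, the following minimal sketch encodes the successive filters as predicates over bibliographic records. The field names and example entries are invented for illustration; the actual screening was performed manually on Web of Science search results.

```python
# Minimal sketch of the PRISMA screening steps described above. The record
# fields and example entries are invented; the actual screening was manual.

records = [
    {"title": "Policy learning in EU impact assessment", "year": 2009,
     "language": "English", "peer_reviewed": True, "measures_learning": True},
    {"title": "Lifelong learning policies in higher education", "year": 2014,
     "language": "English", "peer_reviewed": True, "measures_learning": False},
]

def passes_screening(record):
    """Stage 1 (title/abstract): peer-reviewed English articles, 2000-2019."""
    return (record["peer_reviewed"]
            and record["language"] == "English"
            and 2000 <= record["year"] <= 2019)

def is_eligible(record):
    """Stage 2 (full text): empirical studies measuring belief updating."""
    return record["measures_learning"]

screened = [r for r in records if passes_screening(r)]
eligible = [r for r in screened if is_eligible(r)]
print(f"{len(records)} records -> {len(screened)} screened -> {len(eligible)} eligible")
```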

A final list of 53 articles was obtained from this process. A thematic analysis of the 53 articles (Riche et al., 2020) was performed using a coding grid addressing the research question – how can policy learning be measured? – according to the abovementioned issues related to the measurement of concepts: operationalization, data sources, methods of analysis, levels of analysis, and time. The results of this review are reported in the Appendix.
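
As an illustration, such a coding grid can be represented as one structured record per reviewed article, with one field per measurement issue. The field names and example values below are our assumptions, not the authors' actual grid.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical rendering of the coding grid: one record per reviewed article,
# with one field per measurement issue examined in this review.
@dataclass
class CodedArticle:
    citation: str
    operationalization: str        # "direct" or "indirect"
    data_sources: List[str]        # e.g., ["interviews", "documents"]
    method: str                    # "qualitative", "quantitative" or "mixed"
    level_of_analysis: str         # "micro", "meso" or "macro"
    period_years: Optional[int]    # length of the study period, if identifiable

example = CodedArticle(
    citation="Conrad (2015)",
    operationalization="direct",
    data_sources=["interviews", "participant observation", "documents"],
    method="qualitative",
    level_of_analysis="meso",
    period_years=None,  # illustrative: left unknown here
)
print(example)
```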

This methodological approach has strengths and weaknesses (Moher et al., 2009). We focused on the 2000-2019 period to report on recent research methods. We selected peer-reviewed articles only, with the assumption that this would improve the quality of our results, but this choice can lead to publication bias if key methods are only presented in book chapters – although we believe this to be unlikely. Similarly, language bias can result from our focus on articles published in English. To ensure that this was not a problem, we repeated the same search for articles recorded in Scopus, as well as in Cairn – a database including far more French-language articles – with the (French) keywords ‘politiques publiques’ and ‘apprentissage’, which did not return any eligible results. In the same vein, to our knowledge, there is no complementary research tradition on policy learning in other languages. Last but not least, this literature review is concept-driven, to the detriment of studies looking at the antecedents, processes and outcomes of belief updating in policy contexts without using the concept of ‘policy learning’ as such, e.g., the study by Meseguer (2006) on ‘rational learning’ and ‘bounded learning’ in the diffusion of policy innovations; the analysis by Galey-Horn et al. (2020) of ‘idea brokerage’ and its role in the convergence of policy preferences about teacher effectiveness in the U.S.; or the discussion by Sabel and Zeitlin (2008) on the role of learning in ‘experimentalist governance’. Overall, these risks of bias are a tradeoff for systematicity. However, we suggest that our final list of 53 reviewed articles is relatively representative of policy learning studies, when compared with the bibliometric analysis of policy learning research conducted by Goyal and Howlett (2018) on aspects such as policy sectors and geographical areas.

Results: existing measurement practices for policy learning

Concerns about the measurement of policy learning persist, but attempts to address them have increased: while only two articles were recorded between 2000 and 2005, 15 were published between 2006 and 2011 and 36 between 2012 and 2019. Geographically speaking, the institutional affiliations of the 64 authors of the reviewed articles show that most of them work in North American (33) or European (23) universities. Other regions, such as Asia (5) and Oceania (2), are far less represented. The distribution of papers per journal appears in Table 1. Interestingly, the 53 reviewed papers were published in only nine of the 217 searched journals, with 32 of them (60.4%) published in only three academic journals, i.e., the Journal of European Public Policy, Policy & Politics, and Policy & Society. Overall, these data suggest that empirical research involving the measurement of policy learning tends to be concentrated in a limited number of geographical areas and scientific publications. That said, only a few scholars appear more than once in the study sample (Radaelli four times; Montpetit, Dunlop and Rietig twice).

Table 1: Distribution of reviewed articles per journal

Journal | Number of reviewed articles
Journal of European Public Policy | 15 (28.3%)
Policy & Politics | 9 (17.0%)
Policy & Society | 8 (15.1%)
Review of Policy Research | 7 (13.2%)
Policy Studies Journal | 5 (9.4%)
Public Administration | 4 (7.5%)
Journal of Public Policy | 3 (5.7%)
Local Government Studies | 1 (1.9%)
Regulation & Governance | 1 (1.9%)
TOTAL | 53 (100%)

Source: The Authors

A majority of the studies, i.e., 43 (81.1%), examined processes of social learning. The remaining ten studies (18.9%) examined processes of policy learning and transfer. We report the results of our review in the next sections of this article. They are summarized in Table 2.

Table 2a: Results of the systematic literature review (table not reproduced)

Source: The Authors

Table 2b: Results of the systematic literature review (table not reproduced)

Source: The Authors

Operationalization of the concept

A concept is a multilevel structure composed of a basic level, secondary-level dimensions, and third-level indicators (Goertz, 2006). At the first level, 36 (67.9%) of the articles examined in our review explicitly provide a basic definition of policy learning. In these articles, two dominant definitions of policy learning emerge. On the one hand, nine (17%) of the reviewed studies concurred with Dunlop and Radaelli (2013), who define policy learning as the updating of policy beliefs (Dunlop, 2015, 2017; Kamkhaji & Radaelli, 2017; Lundin et al., 2015; O’Donovan, 2017; Radaelli, 2009; Rietig, 2018; Rietig & Perkins, 2018; Voorberg et al., 2017). This definition has become increasingly common. On the other hand, six (11.3%) of the reviewed studies were based on a second dominant definition of policy learning, related to the advocacy coalition framework. In this theory of the policy process, introduced by Sabatier and Jenkins-Smith (1993), policy learning is conceptualized with a similar focus on updates in policy beliefs and preferences, as ‘relatively enduring alterations of thought or behavioral intentions that result from experience and which are concerned with the attainment or revision of the precepts of the belief system of individuals or of collectivities’ (Sabatier, 1993, p. 42; Albright, 2011; Bomberg, 2007; Heikkila et al., 2014; Montpetit, 2009; Nohrstedt, 2005; Pattison, 2018).

In the interdependency approaches to policy learning, the studies undertaken by Rose (1993) on ‘lesson-drawing’ remain central. From this perspective, policy learning may be defined as a ‘systematic process whereby policy makers study developments in other systems in order to evaluate their potential applicability in the home system’ (Nutley et al., 2012). In a similar vein, policy learning may be conceptualized as ‘a dynamic whereby knowledge about policies, administrative arrangements or institutions is used across time or space in the developments of policies, administrative arrangements and institutions elsewhere’ (Casey & Gold, 2005). However, studies also recognize that policy transfers can result from nonrational and nonlinear learning processes in which context matters (e.g., Dwyer & Ellison, 2009).

The early research on policy learning (e.g., Bennett & Howlett, 1992, p. 291) suggested that, next to cognitive changes, policy learning may also refer to behavioral changes. However, none of the reviewed articles assumes that policy learning is fundamentally more than a cognitive process. By contrast, variation persists across existing studies regarding the type of belief changes that result from learning. Some of the reviewed studies focused on factual beliefs or ‘knowledge’; according to Resh et al. (2014), for example, learning can be assessed by examining ‘the extent to which stakeholders acquire new knowledge relevant for understanding the policy issues being addressed as a result of participating in the collaborative process’ (pp. 586-587). Other studies deliberately focused on policy learning as ‘a cognitive process that features changes in policy preferences’ (Tamtik, 2016, p. 6) or as ‘the use of new knowledge to inform actors’ policy preferences’ (Montpetit & Lachapelle, 2017, p. 197), while others explicitly distinguished changes in factual and normative beliefs or ‘preferences’ (Moyson, 2017).

At the secondary level of conceptual dimensions, the predilection of policy-learning researchers for typologies is well reflected in the reviewed articles: 27 (50.9%) of them introduced or relied on typologies of learning. In the context of water governance in the state of California, for example, policy learning is ‘institutional’ when it ‘relates to the development of rules, norms and practices that increase the predictability of interactions and build trust’, ‘strategic’ when it ‘involves developing a greater understanding of each other’s interests and mutual dependencies’, and ‘cognitive’ when it ‘refers to an improved understanding of a problem, its causes, and solutions’ (Conrad, 2015, p. 350). Four types of policy learning may also be distinguished according to the tractability of the policy problem, as well as the certification of actors (Dunlop & Radaelli, 2013). Problem tractability refers to the level of uncertainty related to specific policy problems, while actor certification refers to ‘the authority and legitimacy of some key actors or venues’ (p. 602). There was epistemic learning, for example, between policy actors and a community of experts about the issue of bovine tuberculosis in England (Dunlop, 2017). Reflexive learning, in contrast, predominated when the UK Health and Safety Executive Agency developed its organizational political capacity, as well as its reputation, through better engagement with external actors such as citizens (Dunlop, 2015). Learning from regulatory impact assessments (RIAs) in Europe can be ‘instrumental’, ‘political’, or involve ‘cross-national emulation’ (Radaelli, 2009). The learning and diffusion dynamics of social policy across China can be coercive, competitive or pre-emptive, depending on the number of bureaucratic agencies involved in the process, their interests, their institutional maturity, as well as the degree of ambiguity of the policy (Shi, 2012).

The extent, intensity and direction of belief alterations can also be characterized. For example, in the context of the international governance of whaling regimes, policy learning is ‘adaptive’ when it involves fixing an error, ‘reformative’ when it involves ‘changes in the methodology and objectives under a constant paradigm’, and ‘paradigmatic’ when it results in ‘changes in the methodology, objectives and paradigm’ (Ishii & Okubo, 2014, p. 259). The transfer of the Silicon Valley model to other regions of the world results from ‘no learning’, or a situation of mere imitation; ‘trial-and-error’ learning, or ‘gradual changes over time to policies, and temporary solutions’; and ‘adaptive’ learning, or a situation ‘where policies are analyzed and then adjusted before they are implemented in the local setting’ (Giest, 2017). The Europeanization of Dutch and Spanish activation policies, through the European Social Fund, was mediated by processes of learning that can be thin or thick (Van Gerven et al., 2014). Finally, Montpetit and Lachapelle (2017) found (and measured) that policy learning is not only a matter of belief change but also results in the reinforcement of existing beliefs about the policies related to hydraulic fracturing (see also Pattison, 2018, in the context of climate and energy policy).

At the third level of indicators, direct and indirect approaches to the operationalization of policy learning may be distinguished (Gerlak et al., 2018). Thirty-two (60.4%) of the reviewed studies adopted a direct approach to policy learning, i.e., they concentrated directly on the measurement of cognitive and mental updates. In the social learning research, for example, ‘the percentage of biotechnological applications on which a respondent indicated having become more or less favorable, as opposed to having an unchanged opinion’, was viewed as an indicator of learning in a survey among policy actors involved in developing biotechnology policy in the EU and Canada (Montpetit, 2009; see also Rietig, 2018; Voorberg et al., 2017). In the research stream on policy interdependency, direct approaches to the measurement of policy learning are also adopted, but to a lesser extent. For example, instances of ‘eye-opening’, as well as the various ‘lessons drawn’ reported by policy-makers engaged in policy transfers from Japanese to Dutch railways, were listed and served as the main indicators of policy learning (Van de Velde, 2013; see also Lundin et al., 2015; Motta, 2018).
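
A direct survey indicator such as Montpetit's (2009) reduces to a simple proportion. The sketch below computes it from invented responses; the application labels and answers are not from the original study.

```python
# Sketch of a direct indicator in the spirit of Montpetit (2009): the share of
# policy applications on which a respondent reports a changed opinion, as
# opposed to an unchanged one. The responses below are invented.

responses = {
    "application_1": "more favorable",
    "application_2": "unchanged",
    "application_3": "less favorable",
    "application_4": "unchanged",
}

changed = sum(1 for answer in responses.values() if answer != "unchanged")
learning_indicator = 100 * changed / len(responses)
print(f"Share of applications with an updated opinion: {learning_indicator:.0f}%")
```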

In the 21 (39.6%) studies that adopted a more indirect approach to the measurement of policy learning, the researchers focused on a series of variables that are theoretically assumed to reflect policy learning (‘proxies’). In the social learning research, for example, ‘the average number of training events a respondent attended per year during his tenure’ was used by Arnold (2014) as a proxy for ‘structured knowledge acquisition’ in the context of wetland regulation. Similarly, changes in institutional designs and in rule configurations also indicated that learning occurred in policy change processes, as exemplified in a study focusing on urban flood mitigation in Colorado (Witting, 2017; see also Howlett et al., 2017; Panke, 2010, p. 810). In the research stream on policy interdependency, successful policy adoptions across geographical entities were regularly used as proxies for policy learning (e.g., Kahn-Nisser, 2015; Shipan & Volden, 2014).
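
An indirect proxy such as Arnold's (2014) likewise reduces to simple arithmetic on observable behavior. All values in the sketch below are invented.

```python
# Sketch of an indirect proxy in the spirit of Arnold (2014): the average
# number of training events attended per year of tenure, taken as a proxy
# for 'structured knowledge acquisition'. The values are invented.

respondents = [
    {"id": "bureaucrat_1", "training_events": 14, "tenure_years": 7},
    {"id": "bureaucrat_2", "training_events": 3, "tenure_years": 12},
]

for r in respondents:
    proxy = r["training_events"] / r["tenure_years"]
    print(f"{r['id']}: {proxy:.2f} training events per year")
```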

This overview of existing operationalizations of policy learning leads to several conclusions. First, while a stabilization of the notion of policy learning – as a cognitive process of updating factual and normative beliefs – has been observed over time, the number of studies relying on clearly identifiable conceptualizations of learning remains limited. Second, no such stabilization is observed in the secondary dimensions of policy learning. This is probably a good thing: such conceptual flexibility allows researchers to choose or create conceptualizations that fit their specific research objectives, even though it restricts the comparability of research findings. Third, an increasing number of studies are adopting a direct approach to the operationalization of learning, with a focus on the measurement of cognitive and mental updates. However, indirect approaches based on proxies for policy learning, such as changes in institutional designs (Witting, 2017), should not be overlooked, as they are useful for examining whether policy learning leads to changes in actual policies and institutions (Moyson et al., 2017).

Data sources

Policy learning has mostly been measured based on interviews, surveys, and documents. Interviews and surveys were the most common data sources: 23 (43.4%) of the reviewed articles were based on this type of data. The larger category included 15 (28.3%) articles based on interviews conducted with diverse policy actors ranging from central decision makers, such as ministers or parliamentary commission members, to representatives of public administrations, interest groups, associations, scientists or politically active citizens (e.g., Dunlop, 2017; Marier, 2009; Zito, 2009). Nutley et al. (2012), for example, conducted interviews with ministers and other decision-makers, with officials from public administrations such as audit bodies, as well as with members of associations representing local governments. Heikkila et al. (2014) drew on semi-structured interviews with various public and private actors involved in the politics of hydraulic fracturing in Colorado to show that learning occurred and led to policy changes. Hudson and Kim (2014) conducted interviews with a small number of Korean officials visiting the United Kingdom to examine how they learned lessons from abroad.

Surveys were used in 8 (15.1%) of the reviewed articles. For example, a survey of participants in marine aquaculture partnerships was used to measure their perception of whether they had learned new knowledge about marine aquaculture issues (Resh et al., 2014). Similarly, based on survey data, Pattison (2018) shed light on the factors shaping policy learning dynamics in a subsystem of policy actors involved in climate and energy issues in Colorado. We observed only one survey experiment, which involved various levels of cost information from invariant information senders to groups of 1,205 Danish local politicians and showed that ‘information may have a stronger impact on political preferences than well-known determinants such as committee and party affiliation’ (Blom-Hansen et al., 2016, p. 119).
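
The logic of such a survey experiment (random assignment of information treatments, then a comparison of mean preferences across groups) can be sketched as follows. This is a toy simulation under assumed effect sizes, not Blom-Hansen et al.'s (2016) actual design or data.

```python
import random
import statistics

random.seed(42)

# Toy survey experiment: respondents are randomly assigned to receive low- or
# high-cost information about a policy, then report a preference (0-10 scale).
def simulate_respondent(treatment):
    baseline = random.gauss(6.0, 1.5)
    effect = -1.2 if treatment == "high_cost_info" else 0.0  # assumed effect
    return max(0.0, min(10.0, baseline + effect))

assignments = ["high_cost_info" if random.random() < 0.5 else "low_cost_info"
               for _ in range(1205)]  # sample size as in Blom-Hansen et al. (2016)
preferences = {"low_cost_info": [], "high_cost_info": []}
for treatment in assignments:
    preferences[treatment].append(simulate_respondent(treatment))

# The treatment-control difference in means estimates the information effect.
for treatment, values in preferences.items():
    print(treatment, round(statistics.mean(values), 2))
```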

While documentary sources were omnipresent in the reviewed articles, such sources did not always serve the sole purpose of measuring policy learning; they often contributed to contextualizing the research or to recording a policy change. Nevertheless, 16 (30.2%) of the reviewed articles analyzed policy learning with documentary sources. Official documents and reports were predominant, with 9 (17%) studies relying on these sources. Content analyses can reveal stability or shifts in policy actors’ perceptions over time. For example, archival documents and internal reviews of the International Monetary Fund were used by Moschella (2011) to assess the delays between past crises and the lessons drawn from these crises in terms of policymaking (‘lagged’ learning). Fritsch et al. (2017) analyzed 517 impact assessments released in Britain (2005-2011) to examine the drivers – e.g., experience, capacity, guidelines, etc. – of change in their analytical content, which they consider to be indicators of a learning dynamic.

In the last five years, media analysis of policy learning has become popular. Several studies relying on the advocacy coalition framework have tracked belief updates through media analysis. For example, Lodge and Matus (2014) have shown the (limited) changes in policy actors’ argumentation about badgers in Britain over the period 1986-2013, despite extensive research on this issue (see also Leifeld, 2013). Similarly, media coverage was used to measure the (limited) amount of policy learning following policy failures (Newman & Bird, 2017; O’Donovan, 2017).
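
Media-based measurement of this kind amounts to coding actors' statements by period and position, then comparing the distributions over time. A minimal sketch with invented statement codes (not the actual coding scheme of the studies cited above):

```python
from collections import defaultdict

# Minimal sketch of media-based measurement: coded statements (actor, year,
# position on the policy) are aggregated per period to detect shifts in
# actors' expressed beliefs. The statements are invented.

statements = [
    ("actor_A", 1995, "pro_culling"), ("actor_A", 2005, "pro_culling"),
    ("actor_B", 1995, "pro_culling"), ("actor_B", 2010, "anti_culling"),
]

positions = defaultdict(lambda: defaultdict(int))
for actor, year, position in statements:
    period = "before_2000" if year < 2000 else "after_2000"
    positions[period][position] += 1

for period, counts in sorted(positions.items()):
    print(period, dict(counts))
# Stable distributions suggest limited learning; shifts suggest belief updates.
```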

The use of legal documents (e.g., bills, decrees, etc.) was less common (only two, or 3.8%, of the reviewed studies). Berglund et al. (2006), for example, deduced from variations in the pace of transposition of EU directives that EU Member States can integrate – i.e., learn – such directives more swiftly into their legal systems (see also O’Donovan, 2017).

Secondary sources, in particular previous academic accounts of past events, were used to assess cognitive updates indirectly in 8 (15.1%) of the reviewed articles. For example, the analysis undertaken by Nohrstedt (2005) of the changes in beliefs about the development of the Swedish nuclear program (in relation to the Three Mile Island accident) relied on the existing studies on this topic. Similarly, Dwyer and Ellison (2009) reviewed the existing research on the ‘Americanization’ of the UK’s active labor market policies to discuss the notions of policy learning and transfer in this specific case and in general.

Finally, 12 (22.6%) of the reviewed studies combined multiple sources of data about policy learning. For example, interviews were combined with participant observation and document analysis to assess the extent of policy learning in California’s water governance (Conrad, 2015). The analysis of policy learning in a study about Belgian mental health care reforms was based on 12 semi-structured interviews, an analysis of policy and organizational documents, and direct observations of meetings (Thunus & Schoenaers, 2017). In a similar vein, documents produced by environmental NGOs were analyzed together with five interviews conducted with some of the organizations’ representatives to examine the teaching and learning dynamics between them and their counterparts from new EU countries regarding environmental policy instruments (Bomberg, 2007). Survey data from the Centers for Disease Control and Prevention were also combined with documentary data from the Substance Abuse and Mental Health Services Administration in the U.S. to find evidence of successful policy emulation in the field of antismoking restrictions targeted toward youths, as an indicator of learning dynamics in policy diffusion (Shipan & Volden, 2014).

Overall, our review points to the eclecticism of the data sources used to measure policy learning, with one more specific lesson: when examining each research stream separately, we observe a predilection for large-N sources in the policy interdependency studies that is less prevalent in social learning research – an observation that could be related to the focus of the former on interdependencies among multiple institutional settings.

Methods of analysis

While qualitative methods are used in 38 (71.7%) of the reviewed studies, only a limited number of these studies are based on a clearly identifiable method. Causal process tracing, for instance, was used ‘to delineate the chain of events leading to specific learning outcomes’ (Conrad, 2015, p. 354; see also Radaelli, 2009). The challenge of capturing the policy learning process has also been addressed by Kamkhaji and Radaelli (2017), who completed a ‘plausibility probe’, i.e., ‘an empirical device to test the plausibility of a novel theoretical mechanism: the aim is to prove that the mechanism is at least feasible’ (p. 724). Similarly, congruence analysis can be conducted to assess the explanatory power of two or more theoretical frameworks that account – or not – for policy learning effects (e.g., Scholten, 2017). Finally, a more inductive, grounded theory approach was implemented by Schofield (2004) based on the following three stages: a first stage involving ‘the development of open and axial codes from the interviews’; a second stage of ‘text searching and hypothesizing between codes’; and a third stage involving ‘the development of selective and conditional matrices’ in which ‘learning’ was the core category explaining the other categories (pp. 293-294). Some authors developed their analysis based on single, ‘in-depth’ case studies in which they found evidence of learning (e.g., Albright, 2011; Dudley, 2007; Scholten, 2017; Thunus & Schoenaers, 2017), while others conducted comparative studies with small-N cases and showed the different learning dynamics across various contexts (e.g., Giest, 2017; Van Gerven et al., 2014; Voorberg et al., 2017).

Only 14 (26.4%) of the reviewed studies relied on quantitative analyses to measure policy learning. Regression analyses of survey data were conducted in ten studies, including one that used multilevel regression to model learning according to the individual characteristics of policy actors and the characteristics of the networks they belong to (Resh et al., 2014). Berglund et al. (2006) accounted for the pace of EU directive transposition with covariance analyses in which learning is one covariate among others. Lodge and Matus (2014) coded 854 actors’ statements from 728 newspaper articles to trace changes in actors’ support for badger culling in the context of British bovine tuberculosis policy, as well as in the type of arguments used to support their position. In Leifeld’s (2013) work on discourse network analysis, actors’ policy preferences and their references to convergent or divergent concepts were coded in 7,249 statements about German pension policy in order to map coalitions. Fritsch et al. (2017) described and accounted for changes in the analytical content of 517 British impact assessments over time using one-way analyses, t-tests, and principal components analyses.
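
A multilevel specification of this kind, with individual learning scores nested in networks, can be sketched with statsmodels. The variable names and simulated data below are assumptions for illustration, not Resh et al.'s (2014) actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulated data: individual learning scores nested within networks, in the
# spirit of a multilevel design. Variable names are assumed.
df = pd.DataFrame({
    "learning": rng.normal(0, 1, n),
    "trust": rng.normal(0, 1, n),
    "network_id": rng.integers(0, 10, n),
})
df["learning"] += 0.5 * df["trust"]  # assumed individual-level effect

# A random intercept per network captures network-level variation in learning.
model = smf.mixedlm("learning ~ trust", data=df, groups=df["network_id"]).fit()
print(model.summary())
```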

Finally, only one reviewed study combined qualitative and quantitative analyses of policy learning. To account for the degree of political activity of small States in the European Union, Panke (2010) examined the effect of learning in regression analyses with indicators such as the number of years of EU membership or whether a State had already held the EU presidency. The results of these analyses were compared with qualitative analyses of interviews conducted with representatives from ministries and permanent representations in Brussels.

Overall, the results point to a predominance of qualitative methods, as well as to a diversity of approaches within each category. Mixed approaches are genuinely scarce, but this observation must be nuanced: because this literature review focuses on policy learning itself, few studies combined qualitative and quantitative approaches to analyze learning, yet some combined these approaches when considering all their variables. For example, Lodge and Matus (2014) used a quantitative analysis of learning but a qualitative approach to coalition coordination.

Levels of analysis

Meso approaches examining the social dimension of policy learning (Dunlop & Radaelli, 2017), with a focus on the groups, organizations and collectives in which individuals interact, were dominant (25, or 47.2%, of the reviewed articles). For example, the International Monetary Fund did not fully draw lessons from the successive financial crises of the 1990s to decisively inform its surveillance policy; rather, it took two decades of incremental adaptation and a major crisis in 2007-2009 for the organization to do so, which typifies processes of ‘lagged learning’ (Moschella, 2011). Many of these studies examined advocacy coalitions (e.g., Heikkila et al., 2014; Lodge & Matus, 2014) or epistemic communities (e.g., Dunlop, 2017; Ishii & Okubo, 2014). More formal organizations engaged in policy learning were also examined, such as (pension) commissions (Marier, 2009), European organizations involved in regulatory impact assessments (Kamkhaji et al., 2017; Radaelli, 2009) or agencies (Zito, 2009). Similarly, Newman and Bird (2017) examined the ability of governments to draw lessons from the policy failures of past governments.

Micro analyses looking at the micro foundations of policy learning (Dunlop & Radaelli, 2017) were less common, with only nine articles (17.0%) having such a focus. We noticed diversity in the types of individual actors scrutinized by these articles, with most of them focusing on subsystems of actors involved in a policy issue (e.g., Montpetit & Lachapelle, 2017; Pattison, 2018). Other articles examined the members of collaborative networks (Resh et al., 2014), European decision-makers (Rietig & Perkins, 2018), local politicians (Blom-Hansen et al., 2016), first-line managers and street-level bureaucrats implementing public policies (Arnold, 2014; Schofield, 2004) or participants in cocreative processes (Voorberg et al., 2017).

Macro analyses examining policy learning at aggregate levels were also less common, with only 13 (24.5%) articles in this category. Horizontal processes of policy learning and transfer between institutional settings, such as states or local governments, were the most studied. For example, Dwyer and Ellison (2009) analyzed learning and the transfer of social policies between the US and the UK (see also, for example, Motta, 2018; Nutley et al., 2012; Shipan & Volden, 2014). At the European level, more specifically, Berglund et al. (2006) focused on the pace of transposition (i.e., learning) of EU directives by Member States. Similarly, Tamtik (2016) focused on EU Commission-induced learning in the Member States’ research policies. In contrast, Panke (2010) examined whether and how learning improved the ability of small Member States to shape EU policies. Finally, three studies adopted a social learning approach to examine whether and how institutional learning occurred after crises and/or policy failures, including Australian health insurance (Kay, 2017), chronic floods in the Denver Metropolitan Area (Witting, 2017), and tornadoes in the Midwestern United States (O’Donovan, 2017).

This section concludes with three remarks. First, there is a relationship between the streams of research and the levels of analysis. On the one hand, the social learning studies are interested in the individual and social dynamics of learning: they represent nearly all the micro and meso analyses of policy learning. On the other hand, policy interdependency studies approach learning by looking at processes that occur across governmental settings and territorial units: they represent all but three of the macro analyses of learning (Kay, 2017; O’Donovan, 2017; Witting, 2017). Second, policy learning constantly operates at several levels at the same time, which was acknowledged by six of the reviewed articles (11.3%). Albright (2011, pp. 505-506), for example, observed that learning differs significantly between coalitions and their individual members. Similarly, despite his focus on individual policy actors, Montpetit (2009) recognized that the social dynamics of learning within European and U.S. policy subsystems are quite different. Montpetit and Lachapelle (2017) also explicitly distinguished between the individual-level and subsystem-level analyses of policy learning regarding gas development in British Columbia and Quebec. Third, the methodological implications of the social nature of learning were relatively strong in the reviewed research, with some of the studies explicitly using dyadic data to account for social learning at the meso level (e.g., Howlett et al., 2017) or policy interdependency at the aggregate level (e.g., Kahn-Nisser, 2015; Lundin et al., 2015; Shipan & Volden, 2014).
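
Dyadic designs of this kind rearrange unit-level adoption data into directed pairs, which then serve as the units of analysis. A sketch with invented adoption years, not taken from any of the studies cited above:

```python
from itertools import permutations

# Sketch of a directed-dyad dataset for policy diffusion: for each ordered
# pair of units (i, j), record whether i adopted the policy after j did.
# Adoption years are invented; None means no adoption.

adoption_year = {"state_A": 2003, "state_B": 2007, "state_C": None}

dyads = []
for i, j in permutations(adoption_year, 2):
    yi, yj = adoption_year[i], adoption_year[j]
    dyads.append({
        "adopter": i,
        "potential_source": j,
        "adopted_after_source": yi is not None and yj is not None and yi > yj,
    })

for dyad in dyads:
    print(dyad)
```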

Dealing with time

Measuring processual concepts raises at least two data collection issues, i.e., the period considered in the study and the choice between repeated and unrepeated measures over time. As far as the period is concerned, there is great variation among the reviewed studies: with the exception of nine studies in which no such period could be clearly identified, 20 (37.7%) focused on periods shorter than ten years, 21 (39.6%) on periods ranging between ten and 20 years, and only three (5.7%) on periods longer than 20 years.

As far as measures are concerned, the distinctions between research methods are smaller. On one side of the spectrum, several studies clearly involved longitudinal data collection and analysis. Leifeld (2013), for example, examined the evolution of German policy actors’ beliefs about pensions based on a longitudinal analysis of their discourses and arguments in media articles. Similarly, Schofield (2004) examined policy implementation in the British National Health Service by performing within-case comparisons based on documents collected at several stages of several policy initiatives. We did not find any instance of repeated data collection from human sources (e.g., longitudinal surveys or interviews with the same individual at different points in time). Beyond this, most studies proposed more or less explicit methodological workarounds to deal with the processual nature of policy learning despite the lack of longitudinal data. For example, surveys can ask self-reporting respondents to compare their thoughts at the time of the survey with their past thoughts (e.g., Montpetit, 2009; Montpetit & Lachapelle, 2017; Pattison, 2018). Similar research strategies may be applied to interviews (e.g., Rietig, 2018; Rietig & Perkins, 2018).
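
The difference between repeated and retrospective measurement can be made concrete: a panel design computes the belief update from two measurement waves, whereas a retrospective design asks respondents to estimate the update themselves, which exposes the measure to recollection bias. A sketch with invented scores:

```python
# Two ways of measuring a belief update between t0 and t1 (scores invented).

# Panel (repeated) measurement: the same respondent is surveyed twice.
belief_t0 = 3.0   # support for the policy at t0 (0-10 scale)
belief_t1 = 6.5   # support for the policy at t1
panel_update = belief_t1 - belief_t0

# Retrospective measurement: at t1 only, the respondent self-reports the
# change, which is vulnerable to recollection bias (e.g., hindsight bias).
self_reported_update = 2.0

print(f"Panel estimate: {panel_update:+.1f}; retrospective estimate: {self_reported_update:+.1f}")
```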

The length of the research period and the repetition of measures are both related to the choice of data sources. Documentary sources, in particular, seem especially appropriate for repeated measures over long periods of time: it is worth noting that the three studies examining periods longer than 20 years were all based on documentary sources. For example, Lodge and Matus (2014) studied the badger issue in British bovine tuberculosis policy over a 27-year span based on media analysis (see also Dwyer & Ellison, 2009; Kay, 2017).

Conclusion and suggestions for future research

While conceptual and theoretical developments about policy learning have been significant over the last 30 years, scholarly attention to the methodological issues related to its measurement has remained scarce, with some exceptions (e.g., Maggetti & Gilardi, 2016). Based on the PRISMA approach (Moher et al., 2009), this paper has filled this gap by reviewing the measurement practices for policy learning in public administration and policy research. More specifically, we have examined the research questions, the operationalization of the concept, data sources, methods of analysis and levels of analysis of the 53 reviewed studies, as well as how they have addressed the processual nature of policy learning.

We find that, despite their eclecticism, the measurement practices for policy learning are closer than they appear. Indeed, the research field on policy learning may be viewed as divided into several relatively autonomous subfields (Moyson & Scholten, 2018). The bibliometric analysis of policy learning research undertaken by Goyal and Howlett (2018) recently confirmed this diagnosis, showing that many studies in this field do not cite each other. However, Dunlop and Radaelli (2018) suggested that policy learning meets the standards of an analytical framework of the policy process, while acknowledging that measurement remains a ‘challenge’ (p. S62). Based on the results of the review, we argue that methodological commonalities for measuring policy learning also transcend subfields. Admittedly, the policy interdependency studies show a predilection for large-N research compared to the social learning studies, and the latter tend more often to unpack and work on subdimensions of policy learning. Beyond this, however, the operationalization, data sources and methods for measuring policy learning, as well as the ways of dealing with time, are relatively similar in both research streams.

One important exception to this general diagnosis relates to the levels of analysis of learning. Social learning research tends to favor micro and meso analyses of policy learning, even if institutional factors may influence learning dynamics, for example the transformation of individual learning into collective learning (see Witting & Moyson, 2015). In contrast, the policy interdependency studies focus on macro analyses, although individual factors can also play a role in this type of policy process (e.g., Gilardi, 2010). In other words, despite our general diagnosis of transversal measurement practices, some specific methodological differences remain, which result from the empirical focus of each research stream. This suggests a need to intensify the dialogue between streams, in line with the following recommendations for future research.

The importance of transparent operationalizations. Despite the focus of this literature review on studies in which policy learning is central, it is difficult to clearly identify how the concept is defined in several of the reviewed articles. In the remaining articles, the diversity of learning definitions may help adapt the concept to different research contexts and objectives (e.g., a study of the social dynamics of learning or a study of the role of learning in policy transfers). However, we argue that, at the very least, one clear definition of policy learning per study is crucial for deciding whether empirical measurement allows the rejection of the ‘null hypothesis’ that there is no policy learning (Radaelli, 2009). Relating the content of original definitions of policy learning to more established ones is also a matter of ‘content validity’ (Carmines & Zeller, 1979, pp. 20-22), which is important for the accumulation of knowledge on this topic.
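
In the simplest quantitative case, rejecting the null hypothesis of non-learning amounts to testing whether beliefs measured at t1 systematically differ from those measured at t0. A paired-test sketch on simulated data (the effect size is an arbitrary assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated beliefs of 40 policy actors measured at t0 and t1.
beliefs_t0 = rng.normal(5.0, 1.0, 40)
beliefs_t1 = beliefs_t0 + rng.normal(0.6, 0.8, 40)  # assumed average update

# H0 (non-learning): no systematic change in beliefs between t0 and t1.
t_stat, p_value = stats.ttest_rel(beliefs_t1, beliefs_t0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p rejects non-learning
```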

Direct or indirect measurement? Measurement approaches involving, for example, interviews or surveys of policy actors concentrate directly on cognitive and mental updates. The strengths of these approaches include their feasibility and the richness of the results that they provide. At the same time, there is growing awareness of the potential biases affecting them. Biases can arise from the research participants (e.g., social desirability, recollection issues, etc.: Moyson, 2017) or from the researchers analyzing the data (e.g., confirmation bias when the null hypothesis of non-learning is not properly tested: Radaelli, 2009). Recollection issues suggest that the longer the policy process, the more cautiously these approaches should be applied, as retrospective accounts can be affected by imprecision or distortion (Blank et al., 2003; Van der Vaart et al., 1995). Indirect approaches involving, for example, the analysis of documents or media articles address this issue. However, future studies should dedicate more effort to delineating the actual relationships between the indicators of, or ‘proxies’ for, policy learning that they rely on and the underlying concept of policy learning that they seek to capture. For example, do the statements made in media articles or documents accurately reflect the content of actors’ beliefs? Journalists may be tempted to interview people whose statements fit the side of the story they wish to present, independently of the actual distribution and evolution of policy beliefs in the policy subsystem (see Trumbo et al., 1998). In other words, indirect measurement approaches remove some sources of bias but create new ones. While waiting for a theory capable of addressing the relationship between policy learning and its indicators, policy research could, at least, more systematically discuss the strengths and weaknesses of data sources on policy learning (e.g., Leifeld, 2013; see also Moyson et al., 2019) or triangulate them more explicitly with other data, for example interviews (Heikkila et al., 2014; Schofield, 2004). Communication science may help interpret the results of such research methods (e.g., on media data, Christian, 2017).

Policy learning as a dependent or independent variable. Policy learning benefits from more advanced operationalizations when it is considered a dependent variable than when it is used as an independent variable (see Table 2). In particular, dichotomous variables that merely reflect the presence or absence of learning are problematic. We suggest opting for categorical or continuous variables and dimensions that operationalize the depth or intensity of learning (e.g., non-learning, low, medium or high intensity), its direction (reinforcement versus revision of beliefs) and its duration, and looking at ‘learning events’ such as eye-openings, surprises, discoveries, etc. Given the low number of studies that treat learning as both an independent and a dependent variable, we also recommend developing methodological approaches that account for the cyclic dynamic of policy learning (e.g., the measurement of feedback loops).
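
Such a richer operationalization can be derived mechanically from two belief measurements. In the sketch below, the belief scale (-5 opposition to +5 support) and the thresholds are arbitrary illustration choices, not a validated coding scheme.

```python
def classify_learning(belief_t0: float, belief_t1: float):
    """Classify a belief update on a -5..+5 scale by direction and intensity.

    The scale and thresholds are arbitrary illustration choices.
    """
    delta = belief_t1 - belief_t0
    if abs(delta) < 0.5:
        return ("non-learning", "none")
    if (belief_t0 >= 0) != (belief_t1 >= 0):
        direction = "revision"       # the belief changed sides
    elif abs(belief_t1) > abs(belief_t0):
        direction = "reinforcement"  # the belief strengthened
    else:
        direction = "moderation"     # the belief weakened without changing sides
    intensity = "low" if abs(delta) < 1.5 else "medium" if abs(delta) < 3.0 else "high"
    return (direction, intensity)

print(classify_learning(2.0, 4.5))   # ('reinforcement', 'medium')
print(classify_learning(1.0, -2.0))  # ('revision', 'high')
```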

Levels of analysis. Policy learning should not necessarily be conceptualized the same way at different levels of analysis (Heikkila & Gerlak, 2013). For example, learning can involve belief updates at the individual level, while at the collective level (e.g., a coalition, a subsystem or a network) it can be viewed as the aggregation of individual belief changes or, further, as the emergence of consensus or belief convergence (for a review, see Riche et al., 2020). However, the results of this literature review have shown that only a few articles recognize the multilevel nature of policy learning (e.g., Albright, 2011) and even fewer propose measurement methods adapted to this nature (e.g., Montpetit & Lachapelle, 2017). Future research should introduce measurement methods that explicitly recognize that the policy learning concept is multilevel, including data collected at multiple levels or methods of analysis that account for such a data structure (for an example, see Riche, 2019).
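
One way to honor this multilevel structure is to measure learning at the individual level and convergence at the collective level, for example as a decline in within-coalition belief dispersion. A sketch with invented scores:

```python
import statistics

# Invented belief scores for one coalition's members at t0 and t1.
coalition_t0 = [2.0, 5.0, 8.0, 3.5]
coalition_t1 = [4.0, 5.0, 6.0, 4.5]

# Individual level: each member's belief update between t0 and t1.
updates = [b1 - b0 for b0, b1 in zip(coalition_t0, coalition_t1)]

# Collective level: convergence as a drop in within-coalition dispersion.
converged = statistics.stdev(coalition_t1) < statistics.stdev(coalition_t0)

print("individual updates:", updates)
print("coalition converged:", converged)
```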

A call for more creativity. Some methods of analysis for policy learning data are rare or absent. For example, despite the call by Shafir (2013) for a behavioral approach to policy processes and the growing success of experiments in public administration and policy research (Jilke et al., 2016), few of the reviewed articles relied on this method. A notable exception is Blom-Hansen et al. (2016), who examined the actual effect of new information on Danish local politicians’ policy preferences with a survey experiment controlling for the power of the information sender. Similarly, longitudinal data collected from human sources – i.e., repeated interviews or surveys with the same actors, for example – were absent from the reviewed studies. Research designs based on this type of data would be desirable, even if the conditions for their funding and feasibility are more difficult to meet.

Policy learning is a fundamental cognitive process of belief updating about public problems and solutions, but theoretical and practical questions persist about how to model it and about its effects on politics and policies. With the methodological insights and suggestions presented in this article in mind, we believe that future research will be better equipped to address these questions.


Bibliography

(The 53 reviewed articles are marked with *)

*Albright, E. (2011). Policy change and learning in response to extreme flood events in Hungary: An advocacy coalition approach. Policy Studies Journal, 39, 485-511.

*Arnold, G. (2014). Policy learning and science policy innovation adoption by street-level bureaucrats. Journal of Public Policy, 34, 389-414.

Bennett, C., & Howlett, M. (1992). The lessons of learning: Reconciling theories of policy learning and policy change. Policy Sciences, 25, 275-294.

*Berglund, S., Gange, I., & Van Waarden, F. (2006). Mass production of law. Routinization in the transposition of European directives: a sociological institutionalist account. Journal of European Public Policy, 13, 692-716.

Blank, H., Fischer, V., & Erdfelder, E. (2003). Hindsight bias in political elections. Memory, 11, 491-504.

*Blom-Hansen, J., Baekgaard, M., & Serritzlew, S. (2016). Shaping political preferences: Information effects in political-administrative systems. Local Government Studies, 42, 119-138.

Böhmelt, T., Ezrow, L., Lehrer, R., & Ward, H. (2016). Party policy diffusion. American Political Science Review, 110, 397-410.

*Bomberg, E. (2007). Policy learning in an enlarged European Union: Environmental NGOs and new policy instruments. Journal of European Public Policy, 14, 248-268.

Carmines, E., & Zeller, R. (1979). Reliability and validity assessment. Beverly Hills, CA / London, UK: Sage.

*Casey, B., & Gold, M. (2005). Peer review of labour market programmes in the European Union: What can countries really learn from one another? Journal of European Public Policy, 12, 23-43.

Christian, S. (2017). Overcoming bias: A journalist’s guide to culture and context. London & New York: Routledge.

*Coletti, P., & Radaelli, C. (2013). Economic rationales, learning, and regulatory policy instruments. Public Administration, 91, 1056-1070.

*Conrad, E. (2015). Bridging the hierarchical and collaborative divide: The role of network managers in scaling up a network approach to water governance in California. Policy & Politics, 43, 349-366.

Dolowitz, D. (2009). Learning by observing: Surveying the international arena. Policy & Politics, 37, 317-334.

Dolowitz, D., & Marsh, D. (2000). Learning from abroad: The role of policy transfer in contemporary policymaking. Governance, 13, 5-23.

*Dudley, G. (2007). Individuals and the dynamics of policy learning: The case of the Third Battle of Newbury. Public Administration, 85, 405-428.

*Dunlop, C. (2015). Organizational political capacity as learning. Policy and Society, 34, 259-270.

*Dunlop, C. (2017). Pathologies of policy learning: What are they and how do they contribute to policy failure? Policy & Politics, 45, 19-37.

Dunlop, C., & Radaelli, C. (2013). Systematising policy learning: From monolith to dimensions. Political Studies, 61, 599-619.

Dunlop, C., & Radaelli, C. (2017). Learning in the bath-tub: The micro and macro dimensions of the causal relationship between learning and policy change. Policy and Society, 36, 304-319.

Dunlop, C., & Radaelli, C. (2018). Does policy learning meet the standards of an analytical framework of the policy process? Policy Studies Journal, 46(S1), S48-S68.

Dunlop, C., Radaelli, C., & Trein, P. (2018). Learning in public policy: Analysis, modes, and outcomes. Cham, Switzerland: Palgrave Macmillan.

*Dwyer, P., & Ellison, N. (2009). ‘We nicked stuff from all over the place’: Policy transfer or muddling through? Policy & Politics, 37, 389-407.

*Farrell, M. (2009). EU policy towards other regions: Policy learning in the external promotion of regional integration. Journal of European Public Policy, 16, 1165-1184.

*Fritsch, O., Kamkhaji, J., & Radaelli, C. (2017). Explaining the content of impact assessment in the United Kingdom: Learning across time, sectors, and departments. Regulation & Governance, 11, 325-342.

Galey-Horn, S., Reckhow, S., Ferrare, J., & Jasny, L. (2020). Building consensus: Idea brokerage in teacher policy networks. American Educational Research Journal, 57, 872-905.

Gerlak, A., Heikkila, T., Smolinski, S., Huitema, D., & Armitage, D. (2018). Learning our way out of environmental policy problems: A review of the scholarship. Policy Sciences, 51, 335-371.

*Giest, S. (2017). Overcoming the failure of ‘silicon somewheres’: learning in policy transfer processes. Policy & Politics, 45, 39-54.

Gilardi, F. (2010). Who learns from what in policy diffusion processes? American Journal of Political Science, 54, 650-666.

Gilardi, F., Füglister, K., & Luyet, S. (2009). Learning from others: The diffusion of hospital financing reforms in OECD countries. Comparative Political Studies, 42, 549-573.

Goertz, G. (2006). Social science concepts. A user's guide. Princeton, NJ: Princeton University Press.

Goyal, N., & Howlett, M. (2018). Lessons learned and not learned: Bibliometric analysis of policy learning. In C. Dunlop, C. Radaelli & P. Trein (Eds.), Learning in public policy: Analysis, modes and outcomes. Cham, Switzerland: Palgrave Macmillan.

Haas, P. (1992). Introduction: Epistemic communities and international policy coordination. International Organization, 46, 1-35.

Haas, P., & Haas, E. (1995). Learning to learn: Improving international governance. Global Governance, 1, 255-284.

Hall, P. (1993). Policy paradigms, social learning, and the state: The case of economic policymaking in Britain. Comparative Politics, 25, 275-296.

Heclo, H. (1974). Modern social politics in Britain and Sweden: From relief to income maintenance. New Haven, CT: Yale University Press.

Heikkila, T., & Gerlak, A. (2013). Building a conceptual approach to collective learning: Lessons for public policy scholars. Policy Studies Journal, 41, 484-512.

*Heikkila, T., Pierce, J., Gallaher, S., Kagan, J., Crow, D., & Weible, C. (2014). Understanding a period of policy change: The case of hydraulic fracturing disclosure policy in Colorado. Review of Policy Research, 31, 65-87.

*Howlett, M., Mukherjee, I., & Koppenjan, J. (2017). Policy learning and policy networks in theory and practice: The role of policy brokers in the Indonesian biodiesel policy network. Policy and Society, 36, 233-250.

*Hudson, J., & Kim, B. (2014). Policy transfer using the ‘gold standard’: Exploring policy tourism in practice. Policy & Politics, 42, 495-511.

Ingold, K., & Varone, F. (2012). Treating policy brokers seriously: Evidence from the climate policy. Journal of Public Administration Research and Theory, 22, 319-346.

*Ishii, A., & Okubo, A. (2014). Path dependence and paradigm shift: How cetacean scientists learned to develop management procedures that survived the controversial whaling regime. Review of Policy Research, 31, 257-280.

Jilke, S., Van de Walle, S., & Kim, S. (2016). Generating usable knowledge through an experimental approach to public administration. Public Administration Review, 76, 69-72.

*Kahn-Nisser, S. (2015). The hard impact of soft co-ordination: Emulation, learning, and the convergence of collective labour standards in the EU. Journal of European Public Policy, 22, 1512-1530.

*Kamkhaji, J., & Radaelli, C. (2017). Crisis, learning and policy change in the European Union. Journal of European Public Policy, 24, 714-734.

*Kay, A. (2017). Policy failures, policy learning and institutional change: The case of Australian health insurance policy change. Policy & Politics, 45, 87-101.

Kirk, J., & Miller, M. (1986). Reliability and validity in qualitative research. Newbury Park, CA: Sage.

Knill, C. (2005). Introduction: Cross-national policy convergence: Concepts, approaches and explanatory factors. Journal of European Public Policy, 12, 764-774.

*Leifeld, P. (2013). Reconceptualizing major policy change in the advocacy coalition framework: A discourse network analysis of German pension politics. Policy Studies Journal, 41, 169-198.

*Lodge, M., & Matus, K. (2014). Science, badgers, politics: Advocacy coalitions and policy change in bovine tuberculosis policy in Britain. Policy Studies Journal, 42, 367-390.

*Lowry, W. (2006). Potential focusing projects and policy change. Policy Studies Journal, 34, 313-335.

*Lundin, M., Öberg, P., & Josefsson, C. (2015). Learning from success: Are successful governments role models? Public Administration, 93, 733-752.

Maggetti, M., & Gilardi, F. (2016). Problems (and solutions) in the measurement of policy diffusion mechanisms. Journal of Public Policy, 36, 87-107.

Malang, T., Brandenberger, L., & Leifeld, P. (2019). Networks and social influence in European legislative politics. British Journal of Political Science, 49, 1475-1498.

*Marier, P. (2009). The power of institutionalized learning: The uses and practices of commissions to generate policy change. Journal of European Public Policy, 16, 1204-1223.

Marsh, D., & Sharman, J.C. (2009). Policy diffusion and policy transfer. Policy Studies, 30, 269-288.

McNabb, D. E. (2017). Research methods for public administration and nonprofit management (Fourth edition). New York, NY: Routledge.

Meseguer, C. (2004). What role for learning? The diffusion of privatisation in OECD and Latin American countries. Journal of Public Policy, 24, 299-325.

Meseguer, C. (2006). Rational learning and bounded learning in the diffusion of policy innovations. Rationality and Society, 18, 35-66.

Moher, D., Liberati, A., Tetzlaff, J., Altman, D., & PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6, e1000097.

*Montpetit, É. (2009). Governance and policy learning in the European Union: A comparison with North America. Journal of European Public Policy, 16, 1185-1203.

*Montpetit, É., & Lachapelle, E. (2017). Policy learning, motivated scepticism, and the politics of shale gas development in British Columbia and Quebec. Policy and Society, 36, 195-214.

*Moschella, M. (2011). Lagged learning and the response to equilibrium shock: The global financial crisis and IMF surveillance. Journal of Public Policy, 31, 121-141.

*Motta, M. (2018). Policy diffusion and directionality: Tracing early adoption of offshore wind policy. Review of Policy Research, 35, 398-421.

Moyson, S. (2017). Cognition and policy change: The consistency of policy learning in the advocacy coalition framework. Policy and Society, 36, 320-344.

Moyson, S., Fievet, B., Plancq, M., Chailleux, S., & Aubin, D. (2019). Make it loud and simple: Coalition politics and policy framing in the French policy process of hydraulic fracturing. Paper presented at the General Conference of the European Consortium for Political Research. Wroclaw, Poland: University of Wroclaw.

Moyson, S., & Scholten, P. (2018). Theories on policy learning: Existing approaches and future challenges. In N. Dotti (Ed.), Knowledge, policymaking and learning for European cities and regions. From research to practice. Cheltenham, UK: Edward Elgar Publishing.

Moyson, S., Scholten, P., & Weible, C. (2017). Policy learning and policy change: Theorizing their relations from different perspectives. Policy and Society, 36, 161-177.

*Newman, J., & Bird, M. (2017). British Columbia’s fast ferries and Sydney’s Airport Link: Partisan barriers to learning from policy failure. Policy & Politics, 45, 71-85.

*Nohrstedt, D. (2005). External shocks and policy change: Three Mile Island and Swedish nuclear energy policy. Journal of European Public Policy, 12, 1041-1059.

*Nutley, S., Downe, J., Martin, S., & Grace, C. (2012). Policy transfer and convergence within the UK: The case of local government performance improvement regimes. Policy & Politics, 40, 193-209.

*O’Donovan, K. (2017). Policy failure and policy learning: Examining the conditions of learning after disaster. Review of Policy Research, 34, 537-558.

*Panke, D. (2010). Small states in the European Union: Structural disadvantages in EU policy-making and counter-strategies. Journal of European Public Policy, 17, 799-817.

*Pattison, A. (2018). Factors shaping policy learning: A study of policy actors in subnational climate and energy issues. Review of Policy Research, 35, 535-563.

Radaelli, C. (2005). Diffusion without convergence: how political context shapes the adoption of regulatory impact assessment. Journal of European Public Policy, 12, 924-943.

*Radaelli, C. (2009). Measuring policy learning: Regulatory impact assessment in Europe. Journal of European Public Policy, 16, 1145-1164.

*Resh, W., Siddiki, S., & McConnell, W. (2014). Does the network centrality of government actors matter? Examining the role of government organizations in aquaculture partnerships. Review of Policy Research, 31, 584-609.

Riche, C. (2019). A network approach to collective learning in environmental policy-making. Paper presented at the Joint Sessions of the European Consortium for Political Research. Mons, Belgium: Université catholique de Louvain.

Riche, C., Aubin, D., & Moyson, S. (2020). Too much of a good thing? A systematic review about the conditions of learning in governance networks. European Policy Analysis, Early View.

*Rietig, K. (2018). The links among contested knowledge, beliefs, and learning in European climate governance: From consensus to conflict in reforming biofuels policy. Policy Studies Journal, 46, 137-159.

*Rietig, K., & Perkins, R. (2018). Does learning matter for policy outcomes? The case of integrating climate finance into the EU budget. Journal of European Public Policy, 25, 487-505.

Rose, R. (1991). What is lesson-drawing? Journal of Public Policy, 11, 3-30.

Rose, R. (1993). Lesson-drawing in public policy: A guide to learning across time and space. London: Chatham House Publishers.

Sabatier, P. (1987). Knowledge, policy-oriented learning, and policy change. Knowledge, 8, 649-692.

Sabatier, P. (1993). Policy change over a decade or more. In P. Sabatier & H. Jenkins-Smith (Eds.), Policy change and learning: An advocacy coalition approach. Boulder, CO: Westview Press.

Sabatier, P., & Jenkins-Smith, H. (1993). Policy change and learning: an advocacy coalition approach. Boulder, CO: Westview Press.

Sabel, C., & Zeitlin, J. (2008). Learning from difference: The new architecture of experimentalist governance in the EU. European Law Journal, 14, 271-327.

*Schofield, J. (2004). A model of learned implementation. Public Administration, 82, 283-308.

*Scholten, P. (2017). The limitations of policy learning: A constructivist perspective on expertise and policy dynamics in Dutch migrant integration policies. Policy and Society, 36, 345-363.

Shafir, E. (2013). The behavioral foundations of public policy. Princeton, NJ: Princeton University Press.

*Shi, S. (2012). Social policy learning and diffusion in China: The rise of welfare regions. Policy & Politics, 40, 367-385.

Shipan, C., & Volden, C. (2008). The mechanisms of policy diffusion. American Journal of Political Science, 52, 840-857.

*Shipan, C., & Volden, C. (2014). When the smoke clears: Expertise, learning and policy diffusion. Journal of Public Policy, 34, 357-387.

Simmons, B., Dobbin, F., & Garrett, G. (2008). The global diffusion of markets and democracy. New York, NY: Cambridge University Press.

Simmons, B., & Elkins, Z. (2004). The globalization of liberalization: Policy diffusion in the international political economy. American Political Science Review, 98, 171-189.

*Szarka, J. (2010). Bringing interests back in: Using coalition theories to explain European wind power policies. Journal of European Public Policy, 17, 836-853.

*Tamtik, M. (2016). Institutional change through policy learning: The case of the European Commission and research policy. Review of Policy Research, 33, 5-21.

*Thunus, S., & Schoenaers, F. (2017). How does policy learning occur? The case of Belgian mental health care reforms. Policy and Society, 36, 270-287.

Trumbo, C., Dunwoody, S., & Griffin, R. (1998). Journalists, cognition, and the presentation of an epidemiologic study. Science Communication, 19, 238-265.

*Van de Velde, D. (2013). Learning from the Japanese railways: Experience in the Netherlands. Policy and Society, 32, 143-161.

Van der Vaart, W., Van der Zouwen, J., & Dijkstra, W. (1995). Retrospective questions: Data quality, task difficulty, and the use of a checklist. Quality & Quantity, 29, 299-315.

*Van Gerven, M., Vanhercke, B., & Gürocak, S. (2014). Policy learning, aid conditionality or domestic politics? The Europeanization of Dutch and Spanish activation policies through the European Social Fund. Journal of European Public Policy, 21, 509-527.

Visser, M., & Van der Togt, K. (2016). Learning in public sector organizations: A theory of action approach. Public Organization Review, 16, 235-249.

Volden, C., Ting, M., & Carpenter, D. (2008). A formal model of learning and policy diffusion. American Political Science Review, 102, 319-332.

*Voorberg, W., Bekkers, V., Timeus, K., Tonurist, P., & Tummers, L. (2017). Changing public service delivery: Learning in co-creation. Policy and Society, 36, 178-194.

Weiss, C. (1977). Research for policy's sake: The enlightenment function of social research. Policy Analysis, 3, 531-545.

*Witting, A. (2017). Ruling out learning and change? Lessons from urban flood mitigation. Policy and Society, 36, 251-269.

Witting, A., & Moyson, S. (2015). Learning in post-recession framing contests: Changing UK road policy. In N. Schiffino, L. Taskin, C. Donis & J. Raone (Eds.), Organizing after crisis: The challenge of learning. Brussels, Belgium: Peter Lang.

*Zito, A. (2009). European agencies as agents of governance and EU learning. Journal of European Public Policy, 16, 1224-1243.


Notes

1 In addition, the so-called ‘managerialist’ approaches rely on insights from the political and organizational sciences to examine the development of the intelligence, sophistication and effectiveness of public policies and public administration. In these approaches, however, attention to organizational concerns has increased over time to the detriment of policy concerns, with a focus on organizational learning, inter-organizational learning and the specific role of bridging or boundary organizations in learning processes (Moyson & Scholten, 2018).

2 While research ontology and research epistemology could also be considered methodological issues, they fall beyond the scope of this article.


List of illustrations

Figure 1 (Source: The Authors): http://journals.openedition.org/irpp/docannexe/image/2083/img-1.png
Table 2a: Results of the systematic literature review (Source: The Authors): http://journals.openedition.org/irpp/docannexe/image/2083/img-2.png
Table 2b: Results of the systematic literature review (Source: The Authors): http://journals.openedition.org/irpp/docannexe/image/2083/img-3.png
Untitled images: http://journals.openedition.org/irpp/docannexe/image/2083/img-4.png, http://journals.openedition.org/irpp/docannexe/image/2083/img-5.png, http://journals.openedition.org/irpp/docannexe/image/2083/img-6.png, http://journals.openedition.org/irpp/docannexe/image/2083/img-7.png

References

Bibliographical reference

Pierre Squevin, David Aubin, Éric Montpetit and Stéphane Moyson, Closer than they look at first glance: A systematic review and a research agenda regarding measurement practices for policy learning. International Review of Public Policy, 3:2 | 2021, 146-171.

Electronic reference

Pierre Squevin, David Aubin, Éric Montpetit and Stéphane Moyson, Closer than they look at first glance: A systematic review and a research agenda regarding measurement practices for policy learning. International Review of Public Policy [Online], 3:2 | 2021, Online since 01 July 2021, connection on 30 May 2023. URL: http://journals.openedition.org/irpp/2083; DOI: https://doi.org/10.4000/irpp.2083


About the authors

Pierre Squevin

Université Catholique de Louvain (Belgium)
pierre.squevin@uclouvain.be

David Aubin

Université Catholique de Louvain (Belgium)
david.aubin@uclouvain.be

Éric Montpetit

Université de Montréal (Canada)
e.montpetit@umontreal.ca

Stéphane Moyson

Université Catholique de Louvain (Belgium)
stephane.moyson@uclouvain.be


Copyright


Creative Commons - Attribution 4.0 International - CC BY 4.0

https://creativecommons.org/licenses/by/4.0/
