Continued discussion of the special issue « Pratiques fondées sur la preuve, preuves fondées sur la pratique ? » (Evidence-based practices, practice-based evidence?)

The Relevance of Improvement Research to Mathematics Education

Paul Cobb
p. 129-132

Index entries

Debates:

Practices and evidence

Full text

Bryk (2015) gives an overview of an approach for improving schooling and students’ learning on a large scale that integrates improvement science methods adapted from medicine and other fields with networked communities of researchers and practitioners who collaborate to address a common problem. As will become apparent, I see considerable value in the improvement research approach that Bryk proposes but argue that it is important to be clear about the types of problems for which the approach is appropriate. By way of background, I should clarify that I have been involved for the last ten years or so in work that seeks to address what it takes to improve the quality of mathematics teaching and learning on a large scale. The details of this work are not important for the purposes of this commentary, but I will draw on some of our findings in order to clarify several of the points I make.

As Bryk observes, “[i]t is important to recognize that each of the research methods we now commonly use originated to address a specific problem context” (p. 472). Clearly, this tenet also applies to improvement research. On my reading, Bryk discusses but does not differentiate cleanly between two broad classes of problems, and I suggest that the improvement research approach is particularly appropriate for only one of them. I can clarify the distinction between the two types of problems by focusing on one of the illustrations that Bryk discusses, that of instructional coaching. Instructional coaches are, ideally, accomplished mathematics teachers who are charged with working with teachers in their classrooms, and sometimes also with groups of teachers in their schools, to support them in improving the quality of their instruction. As Bryk indicates, instructional coaching has become a relatively common improvement initiative in schools in the United States in recent years. He argues, correctly in my view, that “what coaches actually needed to know and be able to do and the requisite organizational conditions necessary for them to carry out this work were left largely unspecified” (p. 468). The issue here is that of specifying the improvement strategy adequately by identifying accomplished coaching practices that actually support teachers’ learning, together with the key aspects of school context that support rather than impede coaches’ enactment of these practices. This is distinct from a second important issue, that of determining how to implement this improvement strategy reliably for different groups of teachers working under varying contextual conditions.

On my reading, the improvement research approach that Bryk describes originated to address the second type of problem, figuring out how to implement improvement strategies reliably, rather than the first, identifying and specifying strategies that are worth trying to implement. The reports I have read of improvement research initiatives indicate that Bryk and his colleagues draw on and synthesize the current research literature to identify leverage points for intended improvement and develop potentially revisable improvement strategies on this basis (Bryk, Gomez, Grunow, & LeMahieu, 2015). In contrast, the extant knowledge base in mathematics education, teacher education, and related fields is thin in many areas, including mathematics-specific professional learning communities, content-focused instructional coaching, and school instructional leadership (Cobb, Jackson, Henrick, & Smith, in press). As a consequence, our work has focused primarily on the first rather than the second of the above issues and has sought to specify, for example, what coaches actually need to know and be able to do to support teachers’ learning and the key aspects of school context that support coaches’ development and enactment of these practices (Kane, Cobb, & Gibbons, in press). Although we use a variety of methods, ranging from cross-case comparative analyses to quantitative analyses employing relatively sophisticated techniques, the tools and methods of improvement science do not seem particularly relevant to the first of the above two issues, even with hindsight.

To this point, I have sought to delineate the types of problems for which the improvement research approach is appropriate. It is also important to note that this approach is quite radical and has far-reaching implications for research in mathematics education. Bryk argues, convincingly in my view, that processes of implementation should be an explicit focus of investigation (and improvement) in their own right. In making this argument, Bryk echoes both de Certeau’s (1984) contention that implementation is necessarily a second act of creation and Cohen and Barnes’ (1993) observation that the implementation of any improvement policy or strategy that does not simply endorse current practice requires learning on the part of those who implement it. Bryk’s proposal is radical in part because it challenges researchers’ standard practice of delegating the implementation of instructional innovations to practitioners.

Bryk discusses the basic tenets of the improvement research approach and emphasizes that “[u]nderstanding the contours of variation and the likely factors that contribute to it is key to achieving better outcomes more reliably at scale” (p. 471). In the case of mathematics education, the outcomes of interest might concern the quality of students’ mathematical communication and reasoning, of teachers’ instructional practices, of the supports for teachers to improve their classroom practices (e.g., instructional coaching), or of school- and system-level instructional leadership in mathematics. Bryk also makes it clear that investigations that aim to understand and improve the quality of implementation processes challenge the common tendency to “implement fast and scale widely” (p. 474). He instead urges us to start small, identify aspects of school- and system-level context that influence the quality of implementation, and revise the supports for effective implementation accordingly, in the process learning how to implement across a range of different contexts. Findings from the work in which I have been involved indicate that this proposal is well founded. In the context of a high-stakes accountability environment, the school and system leaders with whom we worked felt a sense of urgency to improve students’ mathematics achievement scores quickly and almost invariably implemented improvement initiatives across a large number of schools. We found that although change occurred, those changes were often not improvements (Jackson, Cobb, Rigby, & Smith, in press). Thus, although the school and system leaders frequently invested scarce resources in potentially promising initiatives, their efforts often had little if any payoff. In Bryk’s terms, this finding strongly indicates that schools and school systems would make more efficient use of their limited resources (including time) if they started small and learned their way to scale.

The focus of our ongoing work has shifted during the last two years from identifying potentially productive improvement strategies to partnering with school and system personnel to figure out how to implement some of the identified strategies effectively. In making this transition, we have drawn heavily on the tools and methods of improvement science that Bryk and his colleagues have pioneered in education, and have come to view them as an invaluable resource. In the context of this work, improvement involves reducing variation in the implementation of, for example, instructional coaching by skewing the distribution of coaches’ practices “to the right.” Clearly, we would not be able to determine whether a change is an improvement unless we had first specified in some detail what high-quality coaching looks like. In addition, we need a way of assessing the quality of coaching on a regular basis to gauge whether we are making progress, and have come to see practical measures as essential to improvement work.
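To make this distributional framing concrete, the following is a minimal, hypothetical sketch of the kind of check involved; the 1–5 rubric scale, the ratings, and the decision rule are illustrative assumptions, not instruments from the work described here. It simply asks whether coaching-quality ratings collected after an improvement cycle have shifted “to the right” with less spread than before.

```python
# Hypothetical illustration: judging whether a change is an improvement by
# comparing distributions of coaching-quality ratings across two cycles.
# The 1-5 rubric scale and the rating values are invented for this sketch.
from statistics import mean, stdev

before = [2, 3, 2, 3, 3, 2, 4, 3, 2, 3]   # ratings from cycle 1 (hypothetical)
after  = [3, 3, 4, 3, 4, 4, 3, 4, 4, 3]   # ratings from cycle 2 (hypothetical)

def summarize(ratings, threshold=4):
    """Return the mean, the spread, and the share of observations at or
    above a quality threshold on the assumed rubric."""
    share_high = sum(r >= threshold for r in ratings) / len(ratings)
    return mean(ratings), stdev(ratings), share_high

m0, s0, h0 = summarize(before)
m1, s1, h1 = summarize(after)

# "Skewing the distribution to the right" with reduced variation: a higher
# mean, no greater spread, and a larger share of high-quality observations.
improved = m1 > m0 and s1 <= s0 and h1 > h0
print(f"before: mean={m0:.2f} sd={s0:.2f} share>=4: {h0:.0%}")
print(f"after:  mean={m1:.2f} sd={s1:.2f} share>=4: {h1:.0%}")
print("change counts as an improvement" if improved else "change, but not an improvement")
```

Any such rule presupposes, as noted above, a prior specification of what high-quality coaching looks like and measures that are quick enough to use routinely.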

Bryk and his colleagues frequently make the point that it is impossible to improve what you cannot see (Yeager, Bryk, Muhich, Hausman, & Morales, 2013). By this they mean that it is impossible to improve the implementation of a strategy unless we can assess how it is actually playing out in practice. As a consequence, we are currently developing practical measures of high-leverage aspects of mathematics instruction and of supports for teachers’ learning, including instructional coaching. As Bryk notes, the purpose of practical measures differs from that of both accountability measures and research measures in that they are explicitly designed to inform practitioners’ efforts to improve their practices, be they teachers, coaches, school leaders, or system leaders. Practical measures can be demanding to create because they have to yield valid data, and the data have to be quick and easy to collect and analyze so that they do not disrupt practitioners’ work. Only then can practitioners use the measures repeatedly to assess their progress and act with confidence on the resulting evidence. In the course of our work, we have been struck by the dearth of available measures that can inform efforts to improve the quality of mathematics learning and teaching at scale, and see this as an area where work is urgently needed.

Earlier, I noted that Bryk’s argument for improvement research has far-reaching implications. Perhaps the most radical aspect of his proposal is that it requires us to rethink the traditional relationship between research and practice. It requires that researchers address and, ideally, identify with the problems that practitioners encounter in the course of their work, and that they address these problems by doing research with rather than on practitioners. This in turn requires that researchers strive to develop relationships with practitioners that are grounded in trust, take schools’ current improvement goals and strategies as a primary point of reference, and are sensitive to schools’ and school systems’ current capacities and constraints. Thus, Bryk’s proposal requires that researchers not merely change but improve their practices if they are to contribute to improvements in the learning and teaching of mathematics and if their work is to be at the service of practice.

Bibliography

Bryk, A. S. (2015). Accelerating how we learn to improve. Educational Researcher, 44, 467-477.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve. Cambridge, MA: Harvard Education Press.

Cobb, P., Jackson, K., Henrick, E., & Smith, M. S. (in press). Putting the pieces together. In P. Cobb, K. Jackson, E. Henrick, & M. S. Smith (Eds.), Systems for instructional improvement: Creating coherence from the classroom to the district office. Cambridge, MA: Harvard Education Press.

Cohen, D. K., & Barnes, C. A. (1993). Pedagogy and policy. In D. K. Cohen, M. W. McLaughlin, & J. E. Talbert (Eds.), Teaching for understanding: Challenges for policy and practice. San Francisco, CA: Jossey-Bass.

de Certeau, M. (1984). The practice of everyday life. Berkeley, CA: University of California Press.

Jackson, K., Cobb, P., Rigby, J. G., & Smith, M. S. (in press). District instructional leadership. In P. Cobb, K. Jackson, E. Henrick, & M. S. Smith (Eds.), Systems for instructional improvement: Creating coherence from the classroom to the district office. Cambridge, MA: Harvard Education Press.

Kane, B. D., Cobb, P., & Gibbons, L. (in press). Instructional coaching. In P. Cobb, K. Jackson, E. Henrick, & M. S. Smith (Eds.), Systems for instructional improvement: Creating coherence from the classroom to the district office. Cambridge, MA: Harvard Education Press.

Yeager, D., Bryk, A. S., Muhich, J., Hausman, H., & Morales, L. (2013). Practical measurement. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.

How to cite this article

Print reference

Paul Cobb, « The Relevance of Improvement Research to Mathematics Education », Éducation et didactique, 12-1 | 2018, 129-132.

Electronic reference

Paul Cobb, « The Relevance of Improvement Research to Mathematics Education », Éducation et didactique [Online], 12-1 | 2018, published online 20 August 2018, accessed 1 December 2020. URL: http://journals.openedition.org/educationdidactique/3123; DOI: https://doi.org/10.4000/educationdidactique.3123

Author

Paul Cobb

Vanderbilt University

Copyright

All rights reserved
