England has a long and chequered history of system evaluation and monitoring that stretches back several decades (Johnson, 2016), with students’ “standards of academic attainment” the principal indicator of education system quality. System monitoring began in the late 1940s with sporadic sample surveys of students’ “reading comprehension”, followed by a relatively short-lived sample-based survey programme launched in the late 1970s that focused on language, mathematics and science (the “APU programme”, conducted under the remit of the government’s Assessment of Performance Unit). The “identification of underachievement” was a principal APU programme objective – but defining “underachievement” proved problematic. Low relative achievement became the programme focus, identified on the basis of student subgroup comparisons (e.g. boys versus girls), with no particular accountability implications.
Whatever its strengths and weaknesses, the APU programme’s short lifespan was sealed almost as soon as it began, given its designed impotence as a tool for school-level accountability – the issue that consumed the interest of the newly elected Conservative government. And thus it was that with the 1988 Education Reform Act the country’s first national curriculum (for students aged 5 to 16) was hurriedly developed and introduced into England and Wales, along with accompanying statutory assessment arrangements (National Curriculum Assessment, NCA). Northern Ireland followed suit under the Education Reform (Northern Ireland) Order 1989, with some differences in the adopted national curriculum. Scotland, with its independent education system, was not directly affected. Daugherty (1994) offers a fascinating account of the policy landscape throughout this period, with Isaacs (2010) charting the scene as it evolved over the following two decades.
Statutory assessment originally took place in grades 2, 6 and 9, at the end of each of three “key stages” in schooling, with two principal purposes: the provision of student-level attainment information, for the benefit of students, teachers and parents/carers; and the provision of school-level attainment data for use both in school self-evaluation and in external school accountability.
Implementation of the new system was fraught with problems from the start. A principal cause was the unmanageable assessment burden placed on class teachers, who were expected to assess each of their students against each of a large number of “statements of attainment”, and to come to decisions about appropriate “levels” of attainment for every learner in each of a wide range of subjects. Teachers eventually rebelled, and this, along with increasingly voiced concerns about assessment reliability, led to programme reform.
The political response was to replace the then NCA model with full-cohort national testing at the end of the three key stages, with cut scores adopted for purposes of level classification; testing was complemented by a simplified system for teacher assessment, using “best fit” level descriptors. At this point, “National assessment moved away from teachers’ control and was transformed into written examinations in English, mathematics and science … taken by an entire year group simultaneously” (Isaacs, 2010, p. 323). Further problems ensued, and in 1993, in response to a threatened teacher boycott of the planned testing that year, the government set up a critical review of the curriculum and its assessment (the Dearing Review). Among the consequences, testing was dropped at key stage 1, leaving only teacher assessment; tests continued to feature at key stages 2 and 3, but these were shorter than before and were from then on externally marked (tests had previously been marked by the students’ own class teachers).
School league tables were produced for the first time in 1996, for key stages 2 and 3, marking the official focus on school-level accountability. NCA results began to be increasingly used to admonish schools for “poor” performance, and even to close schools considered to be “failing”: both practices continue, and indeed are reinforced. In 2002 the NCA was supplemented by the introduction of a teacher-assessed Early Years Foundation Stage (EYFS) profile for children aged three to five, extending the monitored age range downwards even further. Periodic reviews of the curriculum and of related curriculum assessment experience continued, and further changes were made (Isaacs (2010) provides details). Of particular note, testing at key stage 3 was abandoned, and at key stage 2 cohort testing in science was eventually replaced with sample-based testing. In a separate development, a statutory online “phonics check” (assessment of word decoding skills) for all six-year-olds in state-funded schools was introduced in 2012, so that children with poor literacy development might be identified early, and weaknesses addressed.
In 2014, in response to England’s adequate but not outstanding performance in the international survey programmes, in particular PISA, a new “more demanding” national curriculum was introduced, and from 2016 the assessment of English, mathematics and science has taken a new form. Item response theory (IRT) was adopted as the underlying measurement model. Levels of attainment were abandoned, to be replaced by IRT-based scaled scores, with key stage 1 scores to be used as a prior attainment measure in school comparison analyses at key stage 2. In principle, this “value added” model takes account of the differential intakes of schools, providing more valid comparisons of school effectiveness than would otherwise be the case, using attainment measures assumed to be more dependable than level classifications. Teacher assessment continues to feature, using new performance descriptors, but, as before, carries less importance in school evaluation than test results.
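To make the “value added” idea concrete, the sketch below shows one simple form such an analysis can take: key stage 2 outcomes are regressed on key stage 1 prior attainment nationally, and each school is then characterised by its pupils’ mean residual, i.e. how far they score above or below statistical expectation given their intake. This is an illustration only, using invented scores and a basic linear specification; the methodology actually applied in England’s school comparison measures is more elaborate.

```python
# Illustrative sketch of a simple value-added calculation; this is not
# the official methodology. Each pupil has a prior-attainment (key
# stage 1) score and an outcome (key stage 2) score; a school's value
# added is taken here as its pupils' mean residual from the national
# regression of outcome on prior attainment.
import numpy as np

# Hypothetical pupil records: (school, ks1_score, ks2_score).
pupils = [
    ("A", 16, 104), ("A", 20, 108), ("A", 12, 99),
    ("B", 18, 103), ("B", 22, 107), ("B", 14, 100),
]

ks1 = np.array([p[1] for p in pupils], dtype=float)
ks2 = np.array([p[2] for p in pupils], dtype=float)

# National regression: expected KS2 score given KS1 prior attainment.
slope, intercept = np.polyfit(ks1, ks2, deg=1)
residuals = ks2 - (intercept + slope * ks1)

# School value added: mean residual across the school's pupils.
for school in sorted({p[0] for p in pupils}):
    mask = np.array([p[0] == school for p in pupils])
    print(f"School {school}: value added = {residuals[mask].mean():+.2f}")
```

A school whose pupils score above expectation given their prior attainment receives a positive value, so two schools with very different intakes can be compared on the progress their pupils make rather than on raw attainment.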
The primary motivation for NCA remains a desire to “drive up standards”, i.e. to improve student attainment – currently in language (literacy), mathematics (numeracy) and science.1 But to what extent has this ambition been achieved to date? Has student attainment increased nationally? NCA results should in principle be the obvious indicator here – monitoring attainment has been its purpose after all. But it has failed in this respect, initially because test difficulty over time could not be adequately guaranteed, and latterly because both the curriculum and the assessment methodology have been changed. National qualification results cannot usefully be used to address this question either, given their subject-specific nature, optional candidate subject choices, and no satisfactory way of guaranteeing grade comparability over time. An obvious alternative source of evidence must be the international survey programmes. So what do these have to say? In PIRLS and TIMSS a fluctuating picture of attainment has emerged for England across surveys, which, while not necessarily reflecting reality, has offered politicians and policy-makers occasional opportunities to claim upward movements, however small, as evidence of improving attainment. PISA has proved more problematic in this sense. Here, the picture for England is one of remarkable stability over time in all tested subjects (Jerrim & Shure, 2016) – a stability termed “stagnation” by those with a particular political predisposition.
As noted by McGrane et al. (2017, pp. 149–150), reporting on 2016 PIRLS results for England:
… educational systems are complex and it therefore takes time for educational policies to produce large-scale changes to systems and the attainment of pupils within those systems.
It could even be that, should attainment levels improve markedly over time, it would be impossible to identify exactly which policy initiative(s), if any, to credit. And what if national attainment never rises appreciably? When will it be time to ask whether all of the resources currently consumed by annual cohort testing might be deployed to greater effect elsewhere?