1The measurement of inflation is one of the most obvious areas of application in response to the question “Is quality measurable and how?”1 Indeed, inflation measurement is based on the principle of identifying a “pure price”, in the sense that the index produced is free of quantity and quality noise. It therefore falls to the price statistics divisions of national statistical offices to carry out this statistical processing and, consequently, to measure, directly or indirectly, variations in the quality of goods and services consumed. Methodologies are marked by global harmonization processes (Schmelzer, 2015, in the case of growth and GDP, for example), with standardization operating, in the case of inflation measurement, via the ILO and the IMF at the international level and Eurostat at the European level (StatCan, 2015).
2It is in this context of harmonizing practices that the emergence of the Ottawa Group should be understood. It was set up to provide inflation measurement specialists with “a forum for specialists and practitioners who work for, or are advisors to, national statistical agencies or international organizations to exchange their experiences and thoughts on crucial problems of measuring price change”.2 As the group is essentially made up of “specialists and practitioners who work in the statistical services of different countries or international organizations, or who act as advisors to these services” (ibid.), its work has acted, and continues to act, as a decisive operator in the construction first of a shared statistical practice and, in the process, of a harmonized way of thinking. An initial assessment of the first ten years of the Ottawa Group’s work was provided by Diewert (2007). This evaluation can be considered internal, as it was authored by a key figure within the Ottawa Group, a point we will explore further in this article.
3This article sets out three objectives. After introducing our methodology (Section 1), we begin by reviewing the contextual reasons that led to the creation of the Ottawa Group (Section 2). We show that, initially informal, this group served as a forum for exchanging ideas on several methodological challenges (index-number formula choice, quality adjustment and the treatment of item replacement, and harmonization and comparability across countries, to name a few). We also emphasize that the Ottawa Group has been the crucible for the production of international textbooks.
4We then propose an analysis of the main contributors to the Ottawa Group since 1994 (Section 3). This analysis is based on the database we have built up over a long period, which covers the contributions of participants who have met 17 times over the past 30 years: Who are the contributors to this group? What works do they rely on to defend their theses? These treatments provide an opportunity to demonstrate, despite a plurality of methodological traditions, a gradual methodological convergence. The final section (Section 4) focuses on the statistical treatment of quality. How is quality treated in the contributions? Which authors and theories are most frequently used by Ottawa Group contributors to justify their views on quality? More generally, the question arises as to whether the ways in which the problems of quality adjustment are characterized “have paralleled”, over the 1994-2024 period, “the ways in which they have conceptualized the price index itself” (Banzhaf, 2001, 345). On this issue of quality, we show, on the one hand, that in the 1990s two conflicting conventions around CPI measurement coexisted. On the other hand, we note that several major authors (Diewert, 2007; 2024) recognize a persistent failure to define quality satisfactorily.
5Our approach combines quantitative and qualitative methods. First, we built a specific database from the 646 contributions to the Ottawa Group over the past 30 years. As the majority of the documents (papers and presentations with slides) were not published in indexed journals, we downloaded them from the Ottawa Group website and eliminated duplicates and slide presentations. Artificial intelligence tools were employed to extract core metadata from the contributions, focusing primarily on titles, author names, abstracts, keywords, and cited references. The extracted data were then systematically reviewed and validated by the authors. A database of n = 488 contributions was thus compiled (see Table 1). This enabled us to carry out a bibliometric analysis (Section 3) and an exploratory analysis of article content (Section 4).
Table 1. Descriptive Analysis: Main Information Regarding the Collection
| Description | Value |
|---|---|
| Number of meetings | 17 |
| Time span of meetings covered | 1994-2022 |
| Total documents downloaded | 646 |
| Documents removed (slides, presentations) | 157 |
| Final number of documents | 488 |
6The Ottawa Group contributions present a heuristic character, largely due to their heterogeneous and often informal format. Unlike standardized academic articles, these documents vary significantly in length, from brief notes of 3 pages to more elaborate texts exceeding 30 pages. In the early years, many contributions appear to reflect quasi-oral presentations rather than fully developed written papers, with limited use of abstracts, keywords, or formal structure. Notably, some papers were discussed during meetings even in the absence of their authors, further illustrating the flexible and informal nature of early Ottawa Group proceedings. This variability imposes clear limitations on bibliometric analysis, which we address by analyzing contribution titles and complementing our quantitative methods with qualitative content analysis to capture the richness and diversity of the material. One of the strengths of this less formalized style is that it allowed authors to adopt a more open and exploratory approach. This often led to the expression of diverse perspectives and the open discussion of competing views and controversies surrounding inflation measurement methods. Over time, however, a gradual shift toward a more academic format became apparent. Since the late 2000s, many contributions have increasingly adopted conventional academic features, including the use of keywords, extended bibliographies, and structured formats incorporating literature reviews, methodological sections, empirical results, and conclusions. Given the impracticality of reading all 488 contributions produced over three decades, our analysis of the statistical treatment of quality focuses on a subset of 85 documents that explicitly reference quality, either directly or through methodological terms such as “hedonic”. These documents were processed using NVivo and classified based on their coverage rate relative to the full-text content. This qualitative dimension provides a complementary perspective for examining how quality measurement has been addressed within the Ottawa Group.
- 3 These email exchanges took place between August 19 and 22, 2024.
- 4 These email exchanges took place between October 9 and 15, 2024.
- 5 The exchanges of emails with both these protagonists took place between September 7 and October 10, (...)
- 6 On August 24, 2025.
7On the qualitative side, our analysis focused on both written sources and interviews. Specifically, we conducted five interviews with key informants to help reconstruct the history of the Ottawa Group. Initially, we conducted four interviews with protagonists, statisticians or economists who founded the Ottawa Group, or at least were present from its inception. The first was carried out with Erwin Diewert. At his request, it was undertaken by e-mail exchanges, using a qualitative questionnaire grid.3 This process was generalized with three other key actors: a high-ranking European and then international official (requesting full anonymity) who knew the history of the Ottawa Group well;4 Louis-Marc Ducharme, a statistician in the Canadian national accounting system and at the StatCan price division in the early 2000s; and another former Canadian statistician who witnessed the early development of the Ottawa Group.5 A fifth interview was subsequently conducted with Bert Balk of Statistics Netherlands, a key founder of the Ottawa Group. Selected by the editors of Œconomia as a referee on a first version of our article, Bert Balk provided us with a number of fruitful comments on the history of the Ottawa Group. We felt it useful to add him to our corpus of interviews. With his agreement and that of Œconomia, an interview was conducted using the same questionnaire as for the other actors.6
8In a text from June 2024, produced on the occasion of the Ottawa Group’s 30th anniversary, the economist Diewert (2024) offers an internal analysis of the Ottawa Group’s history, coupled with a form of ego-history. To our knowledge, no article offering a more external perspective on the history of the Ottawa Group has yet been published. The reconstruction of such a history has been carried out, for the purpose of this article, from materials of different natures. Diewert’s written contribution was supplemented by a text by Valentina Stoevska (2018) of the ILO Department of Statistics, which offers a historical perspective on the role played by the ILO in national consumer price indices. The interviews conducted with our five key actors were an invaluable resource for reconstructing this social history.
9Statistical outputs are shaped by both national and international conventions, but they are also subject to tensions—particularly when analytical choices, parameter selection, and broader methodological decisions are influenced by the institutional context. These include national statistical offices, international organizations, and various formal or informal expert groups that help establish the methodological standards followed by statistical agencies (for a historical perspective in the U.S., see Banzhaf, 2001).
- 7 As stated later by Banzhaf for instance: “the term “cost of living” did not imply a link to welfare (...)
10In the field of inflation measurement, the ILO played a leading role for many years, particularly in the early efforts to harmonize methodological practices. As Louis-Marc Ducharme notes, “Historically, the mandate for the CPI was given to the ILO.” (Ducharme, interview) While several countries began developing proto-price indices as early as the 1910s, the International Conference of Labor Statisticians (ICLS) in Geneva issued its first recommendations on comparability and harmonization as early as 1923 (Stoevska, 2018). The early presence of the ILO can be explained by the importance of the economic stakes involved, embedded in the measurement of inflation, and the “comparison of the levels of real wages in different countries.”7 As early as 1925, the ICLS called for prices to be collected for a basket of goods of constant quality, that is to say “pricing of same qualities over time.” (Stoevska, 2018)
11In 1947, the ICLS issued recommendations on what aspects of inflation should be measured, whether to construct a cost-of-living index or a retail price index (ibid., 5). However, there was still little discussion at the time concerning the statistical treatment of quality and its variations. The concept of a “pure price index” was mentioned, defined as “the measure of the change in the cost of purchasing a specific set, or basket, of consumer goods and services”.
Invitations were sent out to all the members of the ILO. The meetings were typically attended by those in charge of the compilation of consumer price indices at the various national institutes (usually separate statistical offices, sometimes banks or government departments). The agendas of the meetings covered mostly practical issues of implementation, inspired by the latest ILO regulation (1962) and working towards an updated version (1987). (Balk, interview)
12The biennial meeting on consumer price indices eventually led to the publication of the first ILO Manual on CPIs in 1989. Authored by Ralph Turvey during his tenure as chief statistician at the ILO, this manual was published under the ILO’s auspices (Turvey, 1989).
- 8 Bert Balk was one of the founders of the GO. He attended fourteen meetings (1-11, 13-15) and (co-)p (...)
13This historical context highlights a longstanding disconnect between academic research and statistical practice in the field of price measurement. As mentioned by Bert Balk,8
Interestingly, though the report leading up to the 1962 ILO regulation contained an extensive bibliography (15 pages) with almost all of the academic publications on consumer price measurement and related methodological issues known to that date, there was in practice hardly any contact between statistical agencies and academia. (Balk, interview)
14The 1989 ILO manual also illustrates this gap, explicitly dismissing much of the academic literature as irrelevant to operational needs. As noted in its preface, the manual prioritized practical procedures over theoretical insights, reflecting a broader institutional perception that academic contributions were too abstract, inaccessible or “irrelevant” to inform day-to-day statistical operations. Illustrative is the following quote from the first page of this manual (1989).
The manual deals with the practice of consumer price index numbers and does not attempt to survey the academic literature on the subject. Much of that extensive and fascinating literature is irrelevant for the purposes of this manual. One reason is that no compiler of a consumer price index, whether it be monthly or quarterly, can hope to obtain new weights more than once a year at the most, and the data used to compute new weights always refer to the past rather than to the present, whereas much of the literature deals with other types of index. In any case, there is no point in providing references to publications that are available only in large specialized libraries … Those readers who feel the need for a more academic treatment should turn to the late Professor R. G. D. Allen’s book Index numbers in theory and practice (London, Macmillan, 1975) which, however, does not profess to deal with the operational aspects of weighting, collecting and computing. (Turvey, 1989, 1, emphasis added).
- 9 In 1998, IWGPI was composed of: Economic Commission for Europe, International Labor Office, Interna (...)
- 10 Minutes of the first meeting of the Technical Expert Group for updating the manual on CPI (TEG-CPI) (...)
15Following this joint meeting, an Inter-Secretariat Working Group on Price Indices (IWGPI) was formed, with the aim of regularly updating the Manual.9 The minutes of the first meeting read as follows: “the chairman explained the role the TEG-CPI will play in updating of the Manual on CPI and presented the terms of reference of the TEG [Technical Expert Group]”.10
- 11 For a succinct and institutional presentation of these city groups, refer to UNSD (https://unstats. (...)
16The Ottawa Group emerged in the broader context of the creation of “city groups”, informal expert groups initiated by national statistical agencies, independent of international organizations.11 These groups were established to facilitate methodological exchange on statistical topics where international coordination had waned, often due to limited institutional capacity or interest. As one of our Statistics Canada interlocutors emphasized, the city groups were not UN creations but rather informal networks formed by national agencies to address emerging needs in areas such as services, prices, and environmental statistics. This dynamic was partly a response to what some participants described as the decline of methodological research at international institutions, following reductions in dedicated resources during the 1980s.
17The first of these was the Round Table on Business Registers in 1986 (later known as the Wiesbaden Group), followed by the Voorburg Group on service statistics in 1987 (founded by the statistical offices of Canada, France, the Netherlands, and possibly the UK and Australia) and the London Group on Environmental Accounting in 1993.
18Established in 1994, the Ottawa Group was the fourth in this lineage. It was not created by the United Nations, nor was its original purpose to update the ILO CPI Manual. As Bert Balk explains in an interview: “The Ottawa Group can be considered as an offspring of, but not “driven by”, the UNECE/ILO Group of Experts on Consumer Price Indices”.
19Rather, its formation was sparked by dissatisfaction with the limited methodological depth of the UNECE/ILO meetings. In the aftermath of the Stigler report, the US Bureau of Labor Statistics (BLS) established a division for price and index number research, staffed by several individuals who actively participated in academic conferences in the US. Elsewhere, only a handful of statistical agencies employed researchers with similar interests, for instance, in Canada, the Netherlands, Sweden, and the United Kingdom. According to Balk, this scarcity of research within statistical agencies was
caused by a disinterest in the implications of academic research. This was mostly considered arcane and not very relevant for statistical practice. But academic interest was also very limited. Price measurement and index number theory were marginal topics, if present at all, at academic conferences such as Econometric Society meetings. (Balk, interview)
20As recounted by several key participants, the idea for the Ottawa Group took shape informally: after the 1993 UNECE/ILO meeting, dissatisfied by the level of discussions, three delegates (Paul Armknecht then of the US BLS, Bert Balk of Statistics Netherlands, and Bohdan Schultz of Statistics Canada)
sat after closure of the 1993 meeting together “over a beer” …, and contemplated the idea of having a separate international platform for all those actively involved in methodological research on and/or related to (consumer) price measurement, not only from official agencies but also from academia. Schultz promised to discuss this idea with his superiors at Statistics Canada, who reacted positively and forwarded the idea to the UN Statistical Commission. (Balk, interview)
- 12 “There are not many academics who specialize in economic measurement problems because Universities (...)
21This gave birth to the Ottawa Group as a city group under the auspices of the UN Statistical Commission. Although this explanation is not universally accepted, Diewert also emphasized the resource constraints faced by National Statistical Offices, the need to build international networks of expertise, and the challenges of gaining academic recognition for work on economic measurement issues.12 As he explained,
National Statistical Offices usually have only a few employees who are able to do research on the various difficult measurement problems that arise when producing economic statistics. The idea behind the creation of these city groups was to give these somewhat isolated researchers in these agencies a chance to discuss difficult measurement problems with researchers from other countries. (Diewert, interview)
22In essence, city groups were conceived as platforms to pool expertise and support both international and national statistical systems in addressing complex methodological and conceptual challenges. While these groups emerged from informal initiatives, the development of questions surrounding price indices was also shaped by the institutional dynamics of international organizations. For example, the Inter-Secretariat Working Group on Price Statistics (IWGPS) has continued to organize biannual meetings on price index issues, contributing to the ongoing exchange of ideas among experts, and reinforcing the broader ecosystem within which city groups like the Ottawa Group operate.
The first meeting of the Ottawa Group was held in November 1994 at Statistics Canada in Ottawa. Invitations were sent to individuals and official agencies, the latter with explicit suggestions to dispatch their specialized researchers if any. The underlying idea was that conference participants could speak à titre personnel, as if on an academic conference. (Balk, interview)
23The forum was designed to foster open, individual contributions rather than formal institutional positions, and while theoretical issues were welcomed, no official recommendations were issued; instead, the proceedings were published to document the discussions. It brought together 23 participants, including 7 from the host agency, to exchange ideas on key challenges in price measurement, with a focus on applied research. As recalled by Balk, the initial terms of reference were formulated as follows.
The purpose of this conference is to bring together, in a forum, specialists from different countries to exchange ideas on crucial problems of measuring price change. The conference will focus on reasons of inadequacies in price indices, particularly in consumer price indices, and on methods used to reduce them. Without avoiding theoretical issues, the emphasis will be on applied research. … There will be no formal recommendations at the end [of a meeting], but the proceedings will be published, including the presented papers (or their summaries) and the main points of discussion. (Balk, interview)
24This very first meeting “featured only two topics, the first being “Price indices at the micro aggregation level and their macro effects” comprising 8 papers and 2 room documents. The second topic was “Bias of a Consumer Price Index, shall we try to estimate it?” The discussion on the second topic was moderated by Jack Triplett, and facilitated by a background article published shortly before the meeting (Wynne and Sigalla, 1994).” (Balk, interview)
25From the outset, the Ottawa Group positioned itself as a forum for exchanging ideas on methodological challenges in price index construction. Contributions to the Ottawa Group have increasingly addressed the statistical treatment of “quality adjustment”, an issue that remains central to the measurement of “pure” price change. But the Ottawa Group was also set up at a very special time in the history of inflation measurement. The early 1990s saw a heated political and methodological debate in the United States around the possible overestimation of inflation. This culminated in the creation of the Boskin Commission in 1995, appointed by the Senate Finance Committee following testimony by Federal Reserve Chairman Alan Greenspan. Although the Boskin Commission was a distinctly American initiative tied to domestic budget policy, its framing of measurement bias, particularly in relation to quality adjustment, resonated internationally.
26As mentioned by Balk,
Though the US Boskin Commission didn’t exist yet, there had emerged, under the influence of a number of academic publications, some international rumor about possible bias of the CPIs and other macroeconomic key statistics. Measurement bias of the CPI was dreaded especially in those countries where automatic uprating procedures for salaries, pensions, etcetera were in force. This was why we felt it important to include a discussion of this topic in the very first meeting of the Ottawa Group. (Balk, interview)
27Indeed, the Commission’s report (Boskin et al., 1996) later concluded that the U.S. CPI overstated inflation by about 1.1 percentage points per year, within a plausible range of 0.8 to 1.6 points. Greenspan had already told the Budget Committees of Congress in January 1995 that
[T]he official CPI may currently be overstating the increase in the true cost of living by perhaps 1/2 percent to 1-1/2 percent per year. ... If the annual inflation adjustments to indexed programs and taxes were reduced by 1 percentage point ... the annual level of the deficit will be lower by about $ 55 billion after five years. (Greenspan, quoted in Moulton, 1996, 2).
- 13 It can be noted (by the way) that there is always a question of quality improvement and never again (...)
28Alan Greenspan urged the Senate Finance Committee to hold hearings on the Consumer Price Index, and the Committee appointed a commission to investigate potential bias (ibid.). He stated that “A more difficult, but no less important, issue concerns making adequate adjustment for the improvement in the quality of goods and services over time” (Greenspan, 1995, 3). He also emphasized the core challenge of defining with precision what “a unit of output” is, noting that “Disentangling price change from quality improvement presents a formidable challenge.” (ibid.)13
29Although debates on inflation measurement had existed since the 1950s (Stapleford, 2011), they became more politically charged in the 1990s, as identifying CPI overestimation was seen as a means to curb “public overspending,” given its role in indexing pensions and social benefits. One interlocutor noted that, “given the political implications that were intended,” this debate posed a risk to the independence of statistical agencies, raising concerns in countries like “Australia, Canada, the USA, France, the Netherlands and the UK.” In this context, a former Statistics Canada official recalled that “rather than wait for the creation of an American solution, it was proposed to take this opportunity to debate the subject of biases between price scientists”.
30According to Ducharme, one of the positive externalities of the Boskin episode was that it brought national authorities’ focus back to the importance of price measurement, leading to increased resources and methodological improvements at institutions like the U.S. Bureau of Labor Statistics (BLS) and, to a lesser extent, Statistics Canada.
- 14 Lequiller (1997), from the French Office for National Statistics for instance, Wynne (1994) from th (...)
31But the “Boskin moment” also served to objectify some of the controversy, as the report was far from unanimously endorsed at the time for its claims of inflation overestimation. Several authors expressed doubts about the results of the Boskin report.14 Dissenting voices were thus heard, particularly on the question of a possible link between quality adjustment and inflation overestimation. In a contribution to the 1997 Ottawa Group meeting, entitled The Harmonisation of Quality Adjustment Practices in the European Union, Don Sellwood, then a Eurostat statistician, challenged key assumptions underlying the Boskin framework and warned against overly simplistic interpretations of cross-country CPI differences based on quality adjustment practices. He challenged, in particular:
“a) The propositions that NSIs and independent researchers have failed to determine whether or not CPIs are understated or overstated on account of their treatment of quality changes;
b) That international comparisons of price changes in those items where quality is changing most offer the prospect of determining the extent of which CPIs may be under or over stated on account of quality changes;
c) That it is possible from within CPI systems to detect bias on account of quality adjustment;
d) That it is appropriate to tackle the “problem of quality adjustment” in CPIs by a process of elimination, prohibiting practices that, on the basis of “expert” consensus, are least acceptable.” (Sellwood, 1995, 4)
32Thus, while the “Boskin moment” did not directly influence the founding of the Ottawa Group, it did help underscore the relevance of the issues the Ottawa Group had already begun to tackle, especially the complex relationship between theoretical frameworks, practical measurement issues, and the political uses of inflation statistics.
- 15 The IWGPS was not a converted mode of the Ottawa Group, as suggested on page xxi of the CPI Manual (...)
- 16 “The Ottawa Group also acted as cradle of the CPI Manual and PPI Manual, which cover both theory an (...)
33After a number of meetings, Ottawa Group participants came to believe that the group should evolve beyond being merely a forum for discussion. Here arose the idea of working on an up-to-date version of the 1989 CPI Manual written by Ralph Turvey (ILO), covering both theory and practice. Collaboration with various international organizations was sought and, as mentioned above, a coordinating body, the Inter-Secretariat Working Group on Price Statistics (IWGPS), was created.15 Many regular contributors to the Ottawa Group were involved in drafting the manuals’ chapters, and the progress of the project became a recurring item on the agenda of the Ottawa Group meetings. In this way, under the aegis of the IWGPS, the Consumer Price Index Manual (2004), and later the Producer Price Index Manual (also 2004) and the Export and Import Price Index Manual (2009), were published. Thus, in this sense the Ottawa Group acted as “cradle” of the manuals16 (Balk, 2008, 39).
34Although the city groups were initially established outside the formal structures of international organizations, their work was later embraced by various United Nations agencies—a move described by Ducharme (interview) as “an important strategic decision”. For the city groups, this recognition offered greater visibility and institutional support, as well as a custodian for their emerging best practices. For UN agencies, it aligned with their mandate to promote and disseminate methodological standards. This convergence—what Ducharme (interview) called “a happy marriage”—enabled the work of these informal expert networks to be more broadly and rapidly disseminated, reinforcing their influence within the global framework of statistical harmonization.
35The bibliometric analysis that follows aims to verify and enrich the historical reconstruction developed earlier. It seeks to capture how the Ottawa Group’s informal beginnings progressively evolved into more structured forms of participation after 2000. By examining co-authorship structures, co-citation patterns, and thematic clusters, this quantitative approach aims to uncover dynamics that remain implicit in documentary sources alone, offering an additional lens on the Ottawa Group’s conceptual and methodological development since 1994.
36Between 1994 and 2022, the Ottawa Group convened 17 meetings in different countries, typically on a biennial basis, resulting in the contribution of more than 450 papers from practitioners and scholars. To support our bibliometric and thematic analyses, we constructed a dedicated database based on the documents available on the UNECE website.17 Initially, 646 PDF files were downloaded. After excluding 157 files consisting of slide decks or duplicate materials, we retained a final corpus of 488 papers spanning from the inaugural meeting in Ottawa (1994) to the most recent in Rome (2022). Some of the earlier contributions were available only as scanned image files. To process these documents, we employed Python-based tools, including pdf2image and pytesseract, for optical character recognition (OCR). The extracted texts were then restructured and archived in PDF format using the os and ReportLab libraries. This pre-processing phase was essential in preparing the dataset for structured metadata extraction and subsequent analysis.
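By way of illustration, a minimal Python sketch of this OCR pre-processing step is given below. Directory names and the page layout are assumptions made for the example, not a reproduction of our actual scripts, which are available upon request (see footnote 18).

```python
# Minimal sketch of the OCR pre-processing step described above.
# File and directory names are illustrative only.
import os
from pdf2image import convert_from_path   # renders scanned PDF pages as images
import pytesseract                        # optical character recognition
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def ocr_scanned_pdf(src_path: str, dst_path: str) -> None:
    """Extract text from a scanned PDF and re-archive it as a text-based PDF."""
    pages = convert_from_path(src_path, dpi=300)
    out = canvas.Canvas(dst_path, pagesize=A4)
    for page in pages:
        text = pytesseract.image_to_string(page, lang="eng")
        writer = out.beginText(40, A4[1] - 40)   # start near the top-left margin
        for line in text.splitlines():
            writer.textLine(line)
        out.drawText(writer)
        out.showPage()                           # one output page per scanned page
    out.save()

if __name__ == "__main__":
    src_dir, dst_dir = "ottawa_group_scans", "ottawa_group_ocr"
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if name.lower().endswith(".pdf"):
            ocr_scanned_pdf(os.path.join(src_dir, name), os.path.join(dst_dir, name))
```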
- 18 Python codes and AI prompts used in this process are available upon request.
37To prepare the database for bibliometric analysis, essential information was extracted from each document, including file name, paper title, authors, year, meeting location, author keywords, abstract, and references. This extraction process was supported by artificial intelligence tools, with each extracted row verified individually by the authors to ensure accuracy.18 An overview of the metadata extracted from the Ottawa Group contributions is presented in Table 2.
Table 2. Overview of Extracted Metadata from the Ottawa Group Contributions
| Field | Example |
|---|---|
| File Name | f139.pdf |
| Title | Direct and Indirect Time Dummy Approaches to Hedonic Price Measurement |
| Authors | de Haan J. |
| Year | 2003 |
| Place | Paris |
| Author Keywords | consumer price index; hedonic regression; quality adjustment; sampling |
| Abstract | Quality-adjusted price indexes are frequently obtained by estimating how much of the price difference between a disappearing item and its replacement is due to a quality difference. Hedonic regression has become a popular quality-adjustment method among statistical agencies. The use of the time dummy, though still limited, is increasing. This paper has two aims. First, it shows how hedonic methods fit into the wider methodology of agencies applying a geometric mean index formula at the elementary aggregate level. Second, the paper argues that the ordinary 'direct' time dummy approach cannot cope with systematic price effects of new and disappearing products. Several indirect alternatives are discussed in which the time dummy coefficients serve as a common adjustment factor and in which systematic effects of adjustment are taken into account. Special attention is paid to the role of the sampling design, in particular to product sampling proportional to expenditure. |
| References | Balk, B.M. (2003); Berndt, E.R.; Griliches, Z.; Rapoport, N.J. (1995); Dalén, J. (2001); Diewert, W.E. (2003); Goldberger, A.S. (1968); Griliches, Z. (1971); de Haan, J. (1998); de Haan, J. (1999); Hausman, J. (2003); Mulligen, P.H. van (2002); Rosen, S. (1974); Triplett, J.E. (2001) |
Source: The authors.
38The number of contributions to the Ottawa Group meetings showed an increasing trend over the covered period, with significant peaks observed in 2011 and 2013 (see Figure 1). These peaks potentially reflect periods of intensified research activity or heightened interest in consumer price index topics. Contributing factors may include the emergence of new data sources such as scanner and web-scraped data, the demand for more timely and granular statistics in the aftermath of the global financial crisis, and the increasing recognition of the Ottawa Group as a central forum for international collaboration on inflation measurement.
Figure 1. Total Number of Contributions to the Ottawa Group per Year
Source: The authors, based on 488 contributions downloaded from the Ottawa Group dedicated page on the UNECE website.
39Overall, approximately 57 % of the contributions lack author-provided keywords, and 19 % do not include end-of-text references (see Figure 2). This issue is especially pronounced in the earlier meetings, where informal formats, often resembling quasi-oral presentations, led to missing or inconsistent metadata. The proportion of contributions with structured references and keywords increased significantly after 2000, reflecting a gradual shift toward a more formal academic ethos within the Ottawa Group. Nonetheless, this persistent heterogeneity in document structure and style posed substantial challenges for conventional bibliometric analysis, which typically depends on standardized elements such as keywords, titles, and abstracts. To address this, a mixed-method approach that combines quantitative bibliometrics with qualitative content analysis was adopted. Full-text processing through NVivo allowed us to identify recurring themes, co-occurrences, and conceptual patterns beyond what could be captured through metadata alone. This approach helped mitigate the limitations of inconsistent formatting and enhanced the depth of our analysis.
Figure 2. Proportion of Contributions with Missing Keywords and References
Source: The authors, based on 488 contributions downloaded from the Ottawa Group dedicated page on the UNECE website.
40The compiled database consists of 488 contributions to the Ottawa Group meetings, encompassing 6,178 references cited across these works. The dataset also includes 612 unique keywords provided by the authors and identifies 466 distinct authors. Of the 488 documents, 151 were single-authored. Furthermore, the average number of co-authors per document is 1.77 (see Table 3), indicating the level of collaboration within the Ottawa Group and serving as an indicator of research teams. This average has continuously increased over time. Let us recall that the Ottawa Group is not a fixed or formally constituted body, but rather a dynamic forum characterized by fluctuating participation over time. Changes in institutional roles, professional trajectories, retirements, and shifting research priorities have all contributed to variations in the authorship and thematic focus of Ottawa Group contributions. This evolving nature may be reflected in the diversity of topics covered across different meetings and in the appearance of new contributors in later years. Rather than viewing this instability as a limitation, we interpret it as a marker of the Group’s openness and responsiveness to emerging methodological challenges. Our thematic analysis captures these shifts, highlighting how specific themes gained prominence at different stages of the Ottawa Group’s development.
Table 3. Evolution of the Average of Co-Authors per Contribution
| Period | Co-authors per document |
|---|---|
| 1994-2022 | 1.77 |
| 1994-2000 | 1.40 |
| 2000-2010 | 1.57 |
| 2011-2022 | 2.06 |
Source: The authors, based on the 488 contributions.
41In this section, advanced bibliometric techniques using Biblioshiny under RStudio are employed to assess the most prolific contributors to the Ottawa Group, evaluating their contributions and collaboration networks. While the Ottawa Group has no formal membership, its history reflects a rotating set of authors and participants, with varying levels of involvement across meetings. We recognize that co-authorship is an imperfect proxy for collaboration, particularly given that some papers were presented by individuals other than the listed authors, or submitted without the authors being present. Nevertheless, co-authorship patterns offer insight into the structure of methodological exchange among contributors. Complementing this, a thematic analysis is used to classify and map key research topics, distinguishing central and peripheral themes across four typologies. We also visualize the thematic evolution over time, focusing on shifts in centrality and density.
- 19 A brief biography of the main authors is presented in Appendix 1.
- 20 Fractionalized contributions are calculated by dividing the credit for a publication equally among (...)
42The analysis highlights key contributors to the Ottawa Group’s research on inflation measurement, with Diewert leading with 25 articles, followed by Mick Silver with 19, and Jan De Haan with 15 contributions (see Table 4).19 When accounting for fractionalized20 contributions, which consider co-authorship, Diewert remains the most influential with a fractionalized count of 16.33. Balk and Jörgen Dalén also show notable fractionalized contributions, reflecting their substantial individual input. Three of the listed authors (Diewert, De Haan, and David Fenwick) are also members of the Ottawa Group’s 2023 steering committee. A minimal sketch of the fractional counting rule is given after Table 4.
Table 4. Top Ten Authors in Terms of Ottawa Group Contributions
| Authors | Articles | Articles Fractionalized |
|---|---|---|
| DIEWERT W E | 25 | 16.33 |
| SILVER M | 19 | 12.92 |
| DE HAAN J | 15 | 9.50 |
| BALK BM | 13 | 10.67 |
| DALÉN J | 12 | 11.50 |
| GUDNASON R | 12 | 9.83 |
| SHIMIZU C | 12 | 4.42 |
| KRSINICH F | 11 | 8.50 |
| FENWICK D | 10 | 5.42 |
| WATANABE T | 10 | 3.75 |
Source: The authors, based on the 488 contributions.
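To make the fractional counting explicit, the short Python sketch below applies the rule described in footnote 20: each contribution distributes one unit of credit equally among its co-authors. The author lists are invented for illustration and are not drawn from our database.

```python
# Toy illustration of whole vs. fractionalized article counts.
from collections import defaultdict

contributions = [
    ["Diewert W.E."],                    # single-authored: full credit of 1
    ["Diewert W.E.", "Fox K.J."],        # two authors: 0.5 each
    ["de Haan J.", "Krsinich F."],
]

full = defaultdict(int)
fractional = defaultdict(float)
for authors in contributions:
    for a in authors:
        full[a] += 1                      # column "Articles"
        fractional[a] += 1 / len(authors) # column "Articles Fractionalized"

for a in sorted(fractional, key=fractional.get, reverse=True):
    print(f"{a}: {full[a]} articles, {fractional[a]:.2f} fractionalized")
```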
43Concerning the contributions over time, the first six authors show consistent contributions across the entire time span from 1994 to 2022, indicating their sustained involvement in the Group’s activities. However, the peaks in activity vary: Dalén and Rósmundur Guðnason made most of their contributions in the late 1990s and early 2000s. In contrast, authors like Chihiro Shimizu, Frances Krsinich, and Tsutomu Watanabe have become more prominent in recent years, suggesting their emergence as valuable contributors in the field of inflation measurement and price indices.
44The analysis of fractionalized articles across 1994-2000, 2001-2010, and 2011-2022 highlights the evolving collaboration patterns among prolific authors in the Ottawa Group (see Figure 3). In 1994-2000, Dalén, Turvey, and Guðnason were key contributors, but with relatively low levels of co-authorship, suggesting individualized research efforts or individual prominence. The 2001-2010 period saw increased collaboration, with Silver, Diewert, and Fenwick becoming central figures. In the most recent phase, 2011-2022, contributors such as Diewert, Krsinich, Shimizu, and Lamboray illustrate a more networked research environment, marked by higher co-authorship levels and stronger cross-national collaboration.
Figure 3. Top Authors’ Contributions vs. Fractionalized Contributions by Period
Source: The authors, based on the 488 contributions.
45These temporal shifts underscore the evolving nature of the Ottawa Group, not only in terms of topics and data sources, but also in the composition and collaboration patterns of its contributors. They reinforce the Group’s role as a dynamic, informal platform that adapts to new challenges in inflation measurement. As we will explore further below, these collaboration patterns also reflect deeper epistemological orientations: while Dalén is closely aligned with Sellwood’s pragmatic-statistical perspective, Krsinich, Shimizu, and Lamboray are more closely connected to Diewert’s theoretical approach.
46The collaboration network provides a visual mapping of scientific collaborations, where nodes represent individual authors and links indicate co-authorship. Studying joint publications using network analysis helps identify degrees of cooperation within the Ottawa Group.
- 21 Betweenness measures how often a node appears on the shortest paths between other nodes. In other w (...)
47The network analysis identifies Diewert as the author with the highest betweenness centrality, emphasizing his crucial role as a key connector who facilitates collaboration across various research communities.21 De Haan, Silver and Balk also demonstrate significant betweenness scores, further reinforcing their importance in bridging gaps between otherwise disconnected groups. Mehrhoff, Fixler, Armknecht, Shimizu, Finkel, and Moulton also rank among the top ten authors in terms of betweenness centrality, indicating their importance in facilitating cross-collaboration within the Group.
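For readers less familiar with these measures, the Python sketch below shows how betweenness and closeness centrality can be computed on a toy co-authorship graph with networkx. It is an illustration only: the edge list is invented, and our own computations relied on Biblioshiny.

```python
# Toy co-authorship graph and the two centrality measures discussed in the text.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Diewert", "Fox"), ("Diewert", "Shimizu"), ("Shimizu", "Watanabe"),
    ("de Haan", "Krsinich"), ("de Haan", "Diewert"), ("Silver", "Heravi"),
])

betweenness = nx.betweenness_centrality(G)  # how often an author lies on shortest paths
closeness = nx.closeness_centrality(G)      # average proximity to all other authors

for author in sorted(betweenness, key=betweenness.get, reverse=True):
    print(f"{author}: betweenness={betweenness[author]:.2f}, "
          f"closeness={closeness[author]:.2f}")
```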
- 22 Closeness centrality measures how close an author is to all other authors in the network, based on (...)
48The evolution of scientific collaboration within the Ottawa Group over the three periods (1994-2000, 2001-2010, and 2011-2022) can be traced through the changing patterns of betweenness and closeness centrality among the top authors.22
49During the first period (1994-2000), the network was relatively fragmented, with most authors exhibiting low betweenness centrality, indicating limited connectivity between different clusters (see Figure 4). Key authors like Moulton, Haworth, Silver and Sellwood began to emerge with modest centrality, but the structure was indicative of isolated knowledge production. Many authors showed high closeness levels, suggesting that within their small groups, they were central, facilitating efficient communication among a tightly knit circle of collaborators.
Figure 4. The Collaboration Networks (1994-2000)
Source: The authors, based on the 488 contributions.
Figure 5. The Collaboration Networks (2001-2010)
Source: The authors, based on the 488 contributions.
Figure 6. The Collaboration Networks (2011-2022)
Source: The authors, based on the 488 contributions.
50The period 2001-2010 (see Figure 5) saw improved network connectivity, with authors like Silver, Fenwick, Diewert, Finkel and Ball becoming more central. Closeness centrality during this period remained high for several authors, such as Gudnason and Jonsdottir, or Shimizu and Watanabe, and for other isolated clusters, indicating a central role in small yet efficient collaborative networks. By 2011-2022 (Figure 6), the collaboration network became more structured and interconnected, with Shimizu, Diewert, Rambaldi, Scholz, Balk and others playing key intermediary roles.
51This evolution of the network towards a more interconnected and cohesive structure reflects the Ottawa Group’s role in fostering methodological convergence and interdisciplinary dialogue. This transformation reflects not only the emergence of new contributors in response to evolving statistical challenges, but also the sustained involvement of long-standing participants, highlighting the Group’s capacity to balance continuity with renewal. Crucially, the absence of formal membership and the diversity of collaboration patterns illustrate the Ottawa Group’s open and adaptive nature. Its informal structure has enabled statisticians, economists, and technical experts from a wide range of institutions and countries to co-develop approaches to shared problems in inflation measurement and price index methodology.
52The shift from loosely connected interactions in the 1990s to a more interconnected network after 2000 reflects the gradual institutionalization of the Ottawa Group. What began as an informal exchange among a small set of statistical offices progressively acquired structure, with the emergence of central contributors, most notably Diewert, and repeated references (see 3.2.3) to a stable group of foundational authors. This contributed to the consolidation of a shared conceptual frame that increasingly guided discussions. Although multiple methodological approaches remain in use across countries, the conceptual orientation of the field has progressively converged toward a unified school of thought shaped by this core group.
- 23 A co-citation of two articles occurs when both are cited in a third article.
53While the collaboration networks describe how contributors interact, the co-citation23 analysis examines whether they draw on similar intellectual foundations. This approach enables us to determine whether the structural convergence observed after 2000 corresponds to a shared conceptual orientation, or whether distinct epistemic lineages continue to coexist within the Group.
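As an illustration of the underlying computation, the Python sketch below counts co-citation pairs from reference lists: two works are linked whenever they appear together in the reference list of the same contribution. The reference lists are invented examples, not extracts from our database.

```python
# Toy co-citation counting from reference lists (see footnote 23 for the definition).
from itertools import combinations
from collections import Counter

reference_lists = [
    ["Diewert (1976)", "Fisher (1922)", "Walsh (1901)"],
    ["Diewert (1976)", "Rosen (1974)"],
    ["Fisher (1922)", "Diewert (1976)", "Rosen (1974)"],
]

co_citations = Counter()
for refs in reference_lists:
    # every unordered pair cited together in the same contribution counts once
    for pair in combinations(sorted(set(refs)), 2):
        co_citations[pair] += 1

for (a, b), weight in co_citations.most_common(5):
    print(f"{a} -- {b}: co-cited in {weight} contributions")
```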
54The network of the Ottawa Group’s contributions, as illustrated in Figure 7, reveals a densely connected and complex structure, allowing the identification of influential papers and the tracing of research topic evolution. The centrality indicators, including betweenness and closeness, highlight the prominent positions of papers such as Diewert (1976), Fisher (1922), Diewert (2003), Rosen (1974), Walsh (1901), Griliches (1971), Diewert (2002), and Court (1939). Diewert (1976) stands out with the highest betweenness, emphasizing its critical role as a bridge between different authors and clusters within the network.
Figure 7. The Co-Citations Network of Ottawa Group
Source: The authors, based on the 488 contributions.
55The co-citation network with a minimum of six edges highlights Diewert’s 1976 paper as the most influential, serving as a key bridge between research clusters. Prominent papers by Summers (1973) and Fisher (1922) also show significant centrality, contributing to the network’s dense structure and reflecting their enduring influence on index number theory and international price comparisons. Notable works by Rosen (1974), Balk (1998), and Ivancic, Diewert, and Fox (2011) further enrich the network, particularly in the areas of hedonic pricing, index decomposition, and scanner data methodologies. Interestingly, Walsh’s 1901 work, despite its age, continues to exert historical influence through his axiomatic approach to index number theory. His work influenced generations of index number theorists including Fisher, who applied Walsh’s principles in early empirical contexts, and later Diewert, who formalized and extended them through modern superlative and economic approaches.
56Examining co-occurrences from full texts using NVivo allows us to identify distinct epistemic communities. For instance, while Balk, De Haan, and Silver are frequently cited alongside Diewert, Dalén and Turvey appear more often in conjunction with Sellwood. This highlights two distinct epistemic clusters: one focusing on theoretical and econometric approaches (Diewert, De Haan, and Silver), and another grounded in pragmatic statistical practices (Sellwood, Dalén, and Turvey). The low overlap between these clusters underscores the coexistence of divergent methodological traditions within the Ottawa Group.
Table 5. Co-Occurrences of Selected Authors (Row Percentages)
Source: The authors, based on the 488 contributions using NVivo matrix coding.
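One way to read a row-percentage matrix such as Table 5 is sketched below: for each row author, the share of the documents mentioning that author in which each column author also appears. The document lists are invented for illustration; the actual coding was performed with NVivo.

```python
# Schematic sketch of a row-percentage co-occurrence matrix for selected authors.
selected = ["Diewert", "Balk", "de Haan", "Silver", "Sellwood", "Dalén", "Turvey"]

documents = [            # authors mentioned in each (invented) contribution
    {"Diewert", "Balk", "Silver"},
    {"Diewert", "de Haan"},
    {"Sellwood", "Dalén", "Turvey"},
    {"Sellwood", "Turvey"},
]

for row in selected:
    docs_with_row = [d for d in documents if row in d]
    if not docs_with_row:
        continue
    # share of the row author's documents in which each column author also appears
    shares = {col: sum(col in d for d in docs_with_row) / len(docs_with_row)
              for col in selected if col != row}
    print(row, {c: f"{s:.0%}" for c, s in shares.items() if s > 0})
```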
57Over time, however, the pragmatic cluster gradually recedes, especially after 2000, leaving the theoretical approach increasingly dominant within the Ottawa Group’s conceptual landscape.
58After examining the connections between authors and key publications, we now shift to exploring the interconnections between research topics through co-occurrence and thematic analysis. The co-occurrence network of author keywords reveals significant linkages. “Scanner data” emerges as the most central keyword followed by “quality adjustment” and “hedonic regression”. These themes often intersect with broader topics such as “system of national accounts” and “price dispersion,” indicating their integration into both methodological and policy-related discussions.
59Due to the absence of keywords in a significant number of contributions within the Ottawa Group, we complemented this analysis with bigram extraction from titles and abstracts. Concerning titles, the bigram “scanner data” shows the highest betweenness and closeness centrality, highlighting its critical role in linking various research themes. Other notable connections include “quality adjustment”, “house price” and “hedonic regression”, reflecting the Ottawa Group’s ongoing engagement with complex measurement challenges. The network of abstract bigrams shows approximately the same connections as the network of title bigrams.
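The bigram extraction itself can be sketched in a few lines of Python. The titles below are invented examples, and the sketch does not reproduce our exact toolchain.

```python
# Toy bigram extraction from titles, after English stop-word removal.
from sklearn.feature_extraction.text import CountVectorizer

titles = [
    "Scanner data and quality adjustment in the CPI",
    "Hedonic regression for house price indices",
    "Quality adjustment using scanner data",
]

vectorizer = CountVectorizer(ngram_range=(2, 2), stop_words="english")
counts = vectorizer.fit_transform(titles).toarray().sum(axis=0)

for bigram, count in sorted(zip(vectorizer.get_feature_names_out(), counts),
                            key=lambda x: -x[1]):
    print(bigram, count)
```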
Figure 8. Co-Occurrence Network of Titles Bigrams
Source: The authors, based on the Titles bigrams.
60The analysis of bigrams in titles from 1994 to 2000 (Figure 8) shows that “quality adjustment” emerges as an influential theme, reflecting the focus on adjusting for quality changes in economic measurements. Other significant bigrams include “European Union,” indicating a possible relationship with harmonization, and “quality change”, further emphasizing the focus on quality-related issues in economic indices. “Scanner data” begins to surface, hinting at an emerging interest in new forms of data collection.
61During the period from 2001 to 2010, “quality adjustment” remains a significant focus, indicating ongoing efforts to refine economic measures. “House price” and “scanner data” reflect the growing importance of real estate markets and digital data collection in economic analysis. Bigrams such as “time dummy” and “hedonic regression” signal a growing reliance on econometric techniques, while references to “digital cameras” and “mobile telephony” illustrate attention to specific rapidly evolving product categories.
62Between 2011 and 2022, “Scanner data” becomes the most central term, reflecting its growing importance in data collection and analysis. “Quality adjustment” and “house price” also remain central. Emerging topics like “web scraping,” “geospatial data,” and “administrative data” point to the integration of new data sources and technologies in CPI calculations. Additionally, the frequent appearance of terms related to housing and real estate, such as “property price” and “residential property”, highlights the sustained interest in real estate markets during this period.
63While a qualitative analysis of Ottawa Group meeting agendas can provide a general sense of evolving themes, the bibliometric and thematic analyses offer a more systematic and replicable means of tracing conceptual shifts over three decades. This will help uncover patterns and thematic transitions that might not be immediately visible through manual inspection. Rather than replacing qualitative insights, this approach complements them, offering structured evidence of how key topics have risen, interacted, or receded in the Group’s collective output over time.
- 24 The analysis includes 1000 words and identifies 4 labels. The minimum cluster frequency is set at 3 (...)
64Based on the analysis of abstract bigrams, the thematic analysis categorizes themes into four quadrants based on their relevance (centrality) and development (density).24 This approach simplifies word co-occurrence networks to highlight key and emerging themes. As Callon et al. (1991) put it, “it is only by making wild over-simplifications and looking at very large clusters that anyone can talk about specialties, fields or research themes that are stable over time”, and thereby identify what changes and what transforms. The analysis distinguishes central and peripheral topics by assessing both centrality, which indicates a topic’s importance in the network, and density, which reflects the strength of connections within a cluster. Each cluster represents a distinct center of interest or research topic. A graphical representation further enables the classification of themes into four categories, as described by Cahlik (2000), depending on their position in the quadrants (a schematic sketch of this classification follows the list below):
- Motor themes, located in the upper-right quadrant, are characterized by both high centrality and high density, indicating that they are well-developed and play a crucial role in the research field;
- Basic and transversal themes, located in the lower-right quadrant, are important transversal topics with high centrality but are less developed due to their low density;
- Emerging or declining themes, positioned in the lower-left quadrant, display low centrality and low density, reflecting their marginal and weakly developed status within the field;
- Niche themes, in the upper-left quadrant, are marked by high density and low centrality, indicating that they are well-developed internally but remain isolated and of limited significance to the broader research area.
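The quadrant classification can be summarized schematically as follows; the centrality and density values, and the split points, are invented for illustration and do not come from our corpus.

```python
# Schematic sketch of the four-quadrant classification used in the thematic maps.
themes = {
    "quality change":    (0.9, 0.8),   # (centrality, density) -- invented values
    "scanner data":      (0.8, 0.3),
    "chain index":       (0.2, 0.7),
    "consumer durables": (0.1, 0.2),
}

def quadrant(centrality, density, c_split=0.5, d_split=0.5):
    if centrality >= c_split and density >= d_split:
        return "motor theme"                 # upper-right quadrant
    if centrality >= c_split:
        return "basic/transversal theme"     # lower-right quadrant
    if density >= d_split:
        return "niche theme"                 # upper-left quadrant
    return "emerging or declining theme"     # lower-left quadrant

for name, (c, d) in themes.items():
    print(f"{name}: {quadrant(c, d)}")
```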
65During the period 1994-2000, the thematic landscape reveals several distinct categories (Figure 9). Motor themes like “quality change”, “Boskin commission” (see above, Section 1) and to a lesser extent, “scanner data,” were both well-developed and central, indicating their critical role in the field. Basic themes, including “quality adjustment” and “hedonic regression” were essential but less developed, serving as foundational yet more transversal topics. Niche themes, like “producer price”, “retail price” and “chain index” were well-developed but of narrower relevance, drawing attention primarily within specific subfields. Meanwhile, emerging or declining themes, such as “consumer durables” and “European Union” were both underdeveloped and less relevant, suggesting they were either nascent fields still gaining traction or areas of diminishing focus within the research community.
Figure 9. Thematic Analysis of the Period 1994-2000
Source: The authors, based on the Abstracts bigrams.
66In the period 2001-2010 (Figure 10), the thematic analysis highlights a strong focus on “housing-related issues”, “quality adjustment”, and “hedonic regression”. Basic themes included foundational topics like “price change” and “labor statistics”. The Emerging or Declining Themes quadrant features clusters such as “scanner data”, “Laspeyres type”, and “alternative approaches”, which saw a decrease in centrality, especially “scanner data”, due to the rise in housing-related inflation analysis driven by real-world economic events.
67During the period 2011-2022, housing markets and property pricing remained highly relevant and well-developed, continuing their prominence from the previous period alongside “hedonic models”. “Quality adjustment” keeps a high level of centrality but with a lower density, moving back toward the basic themes quadrant. “Scanner data” and “chain drift” regained centrality, underscoring their growing significance. Niche themes during this period included specialized topics like “price statistics”, “national accounts”, and “Laspeyres type”.
Figure 10. Trends in Key Concepts from Thematic Analysis across the Three Periods
Source: The authors, based on the Abstracts bigrams.
68All in all, the thematic analysis thus captures long-term shifts that would be difficult to detect through qualitative readings alone, offering structured insights into how the Ottawa Group’s research agenda has adapted to technological and economic transformations. Taken together, the evolution of these themes reflects the Ottawa Group’s increasing engagement with real-world data sources, sector-specific inflation challenges, and the diffusion of advanced modeling techniques. Scanner data has progressed from a niche interest to a central concern, mirroring its methodological importance in modern CPI construction. Similarly, hedonic models have transitioned from a foundational tool to a motor theme, particularly in real estate price analysis. By contrast, quality adjustment, while still central, has seen lower density in recent years, suggesting either that its conceptual importance may now be diffused across more specific applications, or that there is still great difficulty in addressing this issue.
69On the issue of statistical quality adjustment, W. Erwin Diewert (2007) highlighted its persistent complexity since the formation of the Ottawa Group, noting ongoing difficulties in formulating definitive methodological recommendations, particularly regarding hedonic regression techniques, whose theoretical foundations had not stabilized at the time. He expressed hope that future developments would lead to more standardized approaches.
The topic of quality adjustment has been with us since the formation of the Ottawa Group and it is still with us. We have chapters in the CPI and PPI Manuals on methods of quality adjustment and the use of hedonic methods but somehow, we still seem to have trouble making definite recommendations for specific situations. In addition, the theory surrounding the use of hedonic regressions has still not stabilized. Hopefully, in future years, the methodology in this area will become more routine. (Diewert, 2007, 5)
70Despite the strong and persistent methodological uncertainty highlighted by Diewert, the statistical treatment of quality has been regularly addressed. How do the contributions approach the notion of quality? Which authors and theoretical frameworks are most often mobilized by Ottawa Group contributors to support their understanding of quality?
- 25 List of synonyms removed: cpi, consumer price index, price index, consumer price indexes, price ind (...)
71Across the Ottawa Group corpus, “quality adjustment” consistently serves as a conceptual anchor. It ranks among the top ten keywords in the Ottawa Group contributions, alongside “hedonic regression” and “scanner data,” each with 30 occurrences.25 More broadly, 48 quality-related keywords appear, spanning four categories: quality adjustment (35 occurrences), quality change (6), quality assurance and management (5) and data quality (2).
- 26 Two of them concern data quality, not the statistical treatment of quality.
72In document titles, “quality” appears in 51 titles26 out of 488, representing 10.5 % of the contributions and slightly exceeding terms like “scanner” (48) and “hedonic” (41). The most prominent bigram in titles associated with “quality” is “quality adjustment,” including variations like “quality adjustments” and “quality-adjusted price,” with 30 occurrences. This pattern reinforces the central role of quality in the Ottawa Group’s agenda, particularly in relation to pricing methodology.
73Abstract bigram analysis (Table 6) confirms this prominence: “quality adjustment” appears 162 times across various forms. Additionally, the “quality change and improvement” category appears 37 times. Other notable bigrams include “quality characteristics and measures” with 28 occurrences, and “Hedonic Methods” with 9. These results suggest that while “quality” may not always feature prominently in keywords or titles, it remains a core thematic concern within the full-text content, pointing to its increasingly transversal nature in inflation measurement discourse.
Table 6. Frequency of Key Bigrams related to “Quality” in Abstracts
| Category | Keywords | Occurrences |
|---|---|---|
| Quality Adjustment (162 occurrences) | Quality adjustment(s) | 119 |
| | Quality adjusted(ing) | 20 |
| | Quality-adjusted price | 8 |
| | Non-revisable quality-adjusted | 2 |
| | Constructing quality-adjusted | 1 |
| | Quality-adjusted consumer | 1 |
| | Quality-adjusted house | 1 |
| | Quality-adjusted indexes | 1 |
| | Quality-adjusted products | 1 |
| | Quality-adjusted unit | 1 |
| | Quality-adjustment factor | 1 |
| | Quality-adjustment method | 1 |
| | Quality-adjustment standardization | 1 |
| | Quality-adjustments estimate | 1 |
| | Popular quality-adjustment | 1 |
| | Typical quality-adjusted | 1 |
| | Weighted quality-adjusted | 1 |
| Quality Change and Improvement (37) | Quality change | 22 |
| | Quality improvement | 5 |
| | Quality improvements | 4 |
| | Quality change adjustment | 1 |
| | Quality growth | 2 |
| | Quality deterioration | 1 |
| | Improving quality | 1 |
| | Quality-change bias | 1 |
| Quality Characteristics and Measures (28) | Quality differences | 9 |
| | Explicit quality | 8 |
| | Constant quality | 6 |
| | Quality characteristics | 2 |
| | Quality measures | 2 |
| | Quality features | 1 |
| | Quality modifications | 1 |
| Hedonic Methods (9) | Hedonic quality | 8 |
| | Hedonic-method quality | 1 |
Source: The authors, based on the Abstracts bigrams.
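The bigram counts reported in Table 6 can, in principle, be reproduced with a very simple counting pipeline. The sketch below is our own illustration of that logic, using scikit-learn’s CountVectorizer on two invented abstracts; it is not the authors’ actual preprocessing, which also involved the removal of synonyms mentioned in note 25.

```python
# Illustrative sketch: counting "quality"-related bigrams in abstracts with
# scikit-learn. The two abstracts below are invented; in the article, the
# 488 Ottawa Group abstracts would be used instead.
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "Quality adjustment with hedonic regression and scanner data in the CPI.",
    "Constant quality price indexes and quality change in housing markets.",
]

vectorizer = CountVectorizer(ngram_range=(2, 2), lowercase=True)
counts = vectorizer.fit_transform(abstracts).sum(axis=0).A1
bigram_counts = dict(zip(vectorizer.get_feature_names_out(), counts))

# Keep only the bigrams that contain the word "quality", sorted by frequency.
quality_bigrams = {b: n for b, n in bigram_counts.items() if "quality" in b}
for bigram, n in sorted(quality_bigrams.items(), key=lambda x: -x[1]):
    print(bigram, n)
```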
74We also tracked the frequency over time of the top three bigrams in the author contributions from 1994 to 2022, across keywords, titles and abstracts.
75This analysis shows that “Quality adjustment” emerges earlier than “Scanner data” and “Hedonic models” (Figure 11). It experiences a sharp increase in frequency up until 2004. However, occurrences in both keywords and titles indicate a stagnation of “quality” between 2004 and 2011. This stagnation is not observed in the abstracts, suggesting that while “quality” may not have been as frequently highlighted in titles or keywords during this period, it remained a significant underlying concept within the research. This observation aligns with the earlier thematic analysis, indicating the transversal nature of this concept.
Figure 11. Evolution of the Top Three Bigrams in Abstracts (Cumulative Frequencies)
Source: The authors, based on the Abstracts bigrams.
76The co-occurrence network derived from full-text analysis using NVivo (Figure 12) further confirms this transversality. Terms like “hedonic,” “quality adjustment,” and “scanner data” form the conceptual backbone of the network, with strong links to central authors such as Diewert, Silver, and De Haan. A secondary cluster, centered on Sellwood, Dalén, and Lowe, is more loosely connected and associates “quality” with terms like “harmonization,” reflecting an alternative, more institutionally grounded approach. This structure mirrors the epistemic divide identified earlier, showing that quality is both a site of technical convergence and of conceptual tension.
Figure 12. Network Visualization with Proportional Node Sizes and Column-Normalized Edge Thickness
Source: The authors, based on the 488 contributions, using NVivo matrix coding and Python network analysis tools for visualization.
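A figure of this kind can be generated from a term co-occurrence matrix with standard network tools. The sketch below is a simplified illustration with invented counts, not the authors’ NVivo/Python pipeline; in particular, the “column-normalized” edge thickness is approximated here by dividing each edge weight by the larger of its two endpoints’ total frequencies.

```python
# Simplified sketch: a term co-occurrence network with node sizes proportional
# to term frequency and normalized edge widths. Counts are invented.
import networkx as nx
import matplotlib.pyplot as plt

cooc = [  # (term_a, term_b, co-occurrence count)
    ("quality adjustment", "hedonic", 25),
    ("quality adjustment", "scanner data", 18),
    ("hedonic", "Diewert", 12),
    ("quality adjustment", "harmonization", 6),
    ("harmonization", "Sellwood", 5),
]

G = nx.Graph()
for a, b, w in cooc:
    G.add_edge(a, b, weight=w)

# Total frequency of each term = sum of the weights of its incident edges.
freq = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) for n in G}
sizes = [freq[n] * 40 for n in G]
widths = [5 * d["weight"] / max(freq[u], freq[v]) for u, v, d in G.edges(data=True)]

pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, node_size=sizes, width=widths, font_size=8)
plt.axis("off")
plt.show()
```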
77We carried out a literature review of the 85 Ottawa Group contributions incorporating the word “quality” or associated methodological terms (essentially the term “hedonic”): 61.2 % of the papers exploring the question were published between 1994 and 2001. In 2004, the year in which the theme was most discussed, 45 % of the papers addressed the issue of quality. Since then, no more than 15 % of annual contributions have explicitly addressed questions of quality and the methods used to measure its variations.
Figure 13. Evolution of the Number of Ottawa Group Papers Treating the Topic “Quality” + “Hedonic”
Source: The authors, based on the titles of the 488 contributions.
78Our analysis reveals that questions of quality measurement are important (4.2.1), that they are closely linked to conceptual issues (4.2.2), and that there are two opposing conventions when it comes to the statistical treatment of quality in inflation measurement (4.2.3).
79The subject of quality has been on the table since the beginning of the Ottawa Group’s work (1994). Highly present in the Boskin et al. report (1996), it was still considered in the 2000s, by some authors, as the major topic of the Ottawa Group’s work: “Today most experts agree that the treatment of quality changes poses the most important problem for price statistics” (Hoffmann, 1999). This question was addressed again, in similar terms, at the end of our observation period, for example by Menz et al. (2022). Given the variety of problems posed by inflation measurement, Menz et al. thus emphasize the question of quality adjustment and the fact that “inflation will be overestimated if rising prices are not adjusted for improved product quality or if products of different quality are taken as close substitutes”.
80This recognition of the importance of quality treatment in measuring inflation is, however, at odds with the low proportion of goods and services that have genuinely been affected by these quality adjustments so far, as can be seen from Table 7 (Menz et al., 2022).
Table 7. Quality-Adjusted Product Groups in the HICP
| Country | Products |
|---|---|
| Austria | Clothing and footwear, recreation and culture (books, DVDs, CDs), telecommunication, durable goods and cars. |
| Belgium | Cars, video games, CDs, DVDs, books, clothing and footwear. |
| Cyprus | Electronics, cars. |
| Estonia | Cars, mobile phones, clothing and footwear, restaurants and cafés, package holidays. |
| Finland | Cars. |
| France | Durable goods, clothes, cars, newspapers, books. |
| Germany | Clothing and footwear, technical products, books, CDs, downloads, computer games, software, cars, electronics, residential property. |
| Greece | No information available. |
| Ireland | Clothing and footwear, cars, electronics, CDs, DVDs. |
| Italy | Clothing and footwear, processed or fresh food, electronics, DVDs, fuels, cars. |
| Latvia | Cars, electronics, fruit, vegetables, clothing and footwear, books. |
| Lithuania | Food and beverages, clothing and footwear, furnishings, household equipment, cars, electronics, books. |
| Luxembourg | Cars. |
| Malta | Cars, laptops, mobile phones, cameras, clothing and footwear, books, recording media, computer games. |
| Netherlands | Clothing and footwear, tobacco, cars, electronics, boats. |
| Portugal | Cars, clothing and footwear, mobile phones. |
| Slovakia | Package holidays, cars, clothing and footwear, books, CDs, computer games. |
| Slovenia | Electronics, household appliances, cars, clothing and footwear, books, DVDs, computer games, medicines, audio-video equipment, PCs. |
| Spain | Cars, food, medicines, personal care, fresh food, clothing and footwear, furniture, household appliances, restaurants. |
Source: Menz et al., 2022.
81The papers in our corpus highlight that the most pressing challenges are the difficulty of defining quality (Astin and Sellwood, 1997) and the difficulty of developing methods accurate enough to estimate “true price” changes; the first papers also introduce the concepts of “trading down” and “trading up” (Hoffmann, 1999). In this first period, it is mainly Don Sellwood (1995; 1997; 1999) who raises the biggest challenges. He questions, for instance, the traditional reliance on utility theory in CPI construction and argues for a more pragmatic and statistically grounded approach based on observable transactions. He calls for more rigorous conventions and criteria for distinguishing quality change from price change, and he emphasizes the importance of stratification and matching techniques. In 2001, he completed his argument by noting that existing theoretical frameworks for quality adjustment are insufficient and by advocating a more practical, observation-based approach grounded in sampling theory. He introduced a distinction between the static and the dynamic universe of transactions, and he called for empirical research using scanner data to examine the effects of different sampling and quality adjustment practices.
- 27 “’Quality’ is a word used over and over again in everyday life in all kind of contexts. It seems se (...)
82The pluralism of expert approaches to inflation measurement in the 1980s and 1990s helps to shed light on the emergence of the Ottawa Group, as the expert groups were drawn from different epistemic communities over these decades. W. E. Diewert emphasizes the contrast, in the contributions to the first session of 1994, between two types of experts: on the one hand, those from North America, committed to what he calls “the economic approach”, which consists in basing methodological choices on the microeconomic theory of the consumer and utilitarianism. This approach notably relies on the idea of fictitious prices and fictitious consumers (Sellwood, 2001).27 This perspective is based on the so-called Triplett axiom according to which “The economic concept of consumption drives reasoning about consumer price index number issues”.
- 28 Sellwood, an advocate of this approach, states in his 2001 article: “The way forward requires conce (...)
83On the other hand stand those who favor a statistical approach and Lowe-type indices (the Laspeyres, Paasche and Fisher family, etc.), and who are keen to rely on real observations and sampling theory.28 Don Sellwood can be seen as a voice for this approach. He contributed eight papers to the Ottawa Group between 1994 and 2001: 1994 (two papers), 1997 (three papers), 1998, 1999, and 2001.
84Sellwood, like Dalén (1998), to whom he refers, opposes this “Triplett axiom”, considering it “simply a rhetorical ploy”. It is worth noting that Sellwood was directly confronted with the questions of harmonization of methods because of his very central position at Eurostat during this period and the project to develop the HICP for European policy (see Box; see also Astin, 2021).
Box – The statistical treatments of quality as a major source of non-comparability of CPIs between countries?
The impact of the statistical treatment of quality differs across studies and papers. The most interesting paper from this point of view is that by Astin and Sellwood (1997), because it reports on two studies conducted by Eurostat to try to estimate how far the treatment of quality variations could differ from one euro zone country to another. The exercise consisted in asking the Offices for National Statistics to calculate, for a range of goods, the difference between the price index used in calculating the CPI and the price that would have been used under the assumption of unchanged quality. According to Sellwood, the difference between unit values on one side and the corresponding price index should result from the quality adjustment procedure, giving “an indication of the extent of the adjustments”. This work led to the conclusion that the statistical treatment due to quality variation is “well above what most observers might expect”.
- 29 Which confirms our point that the work presented to the Ottawa Group can have a very exploratory pe (...)
The author quickly adds that these studies will not be published:29 “These studies were necessarily crude and for this reason and because of the risk of inappropriate inferences being drawn from them there are no plans to publish them”. He nevertheless insists that the results are telling: “They, nevertheless, suggest that it is safe to assert that quality adjustment practices are a major source of non-comparability between the CPIs of the European Union”.
85Sellwood’s defense of his pragmatic position can be understood in this context. According to him, Eurostat and its statisticians adopted, at least until the early 2000s, a pragmatic approach. Sellwood rejects “the introduction of fictitious consumers and fictitious prices as serving mainly to confuse the practical issues”. He adds, marking here his frontal opposition to Diewert: “Readers are invited to set aside theories involving imaginary worlds and look at the facts of what index compilers actually do and the practical options for improving CPI construction” (Sellwood, 2001). “The concept [of quality] certainly drives certain reasoning but it does not justify that reasoning and it does not drive practical reasoning” (Sellwood, 2001). He would always consider that, in the field of the statistical treatment of quality, only the different methods and their impacts on the level of inflation should be explored, with a certain degree of trial and error, pragmatism and uncertainty.
CPIs are not defined by broad concepts or principles but by the practices that have evolved to resolve the many measurement problems faced in their construction, in particular sampling and quality adjustment practices. Economic theories of inflation measurement have contributed little to the actual practical decision processes of index construction and practitioners have failed to develop more than rudimentary conceptual frameworks. As a result, a variety of practices are followed without warrant and without any assessment of their effect. (Sellwood, 1998, 1)
86His plea (Sellwood, 1995) for the establishment of a database aimed at mastering all the adjustment methods carried out by the Eurostat countries can be read as reflecting a constructivist epistemology in the sense of Alain Desrosières. This French historian of statistics was critical of approaches to product or method harmonization, and observed that most international organizations framed harmonization of methods as a convergence of processes. Desrosières recognizes that when methods are harmonized, “the entire chain of the production process is assumed to be ‘harmonized’. In doing so, we move away from a realistic epistemology, where the object and its statistics pre-exist the measurement operations. The two harmonization procedures have the same aim, to create a common measuring space, but their theoretical justifications and approaches are very different” (Desrosières, 2008, 206). As early as 1997, Sellwood laid the foundations for such a pragmatic approach, from a constructivist perspective: “In short, the approach to quality adjustment in the European Union is not to continue the search for ‘the algorithm’ which will replace existing algorithms but to follow the heuristic of eliminating practices that are, by common agreement, wrong”. He added,
The approach has therefore changed to one of seeking to reduce the diversity of practice by first eliminating that which can be agreed as the poorest of practice. Further, to treat differentially those areas where quality changes are thought to be important from those where it is regarded as unimportant or impossible to quantify. This may be equivalent to saying "We admit that we do not know precisely what quality change is or how to allow for it but we believe we know enough about where it occurs to be able to define sufficiently objective procedures to constrain NSIs to treat it in similar or less dissimilar ways than is currently the case. (Sellwood, 1995, 2)
87The database envisioned by Sellwood was never fully prioritized, to the extent that this issue remains on the European Central Bank’s agenda as of 2021 (Menz et al., 2022).
88Behind these opposing positions (theoretical vs. statistical) lie conceptual issues. While, as in the above treatments of the corpus as a whole, most of the texts deal with methodological issues relating to “quality variations” or “quality adjustment”, they all refer, most often implicitly through their demonstrative choices, to the centrality of the market in neoclassical theory, as Hoffmann clearly illustrates in his 1999 contribution to the Ottawa Group.
It is true that economists expect price differences to reflect differences in quality in a long-run market equilibrium under the condition of perfect competition; thus, by and large, the methods employed by the Federal Statistical Office yield an undistorted estimate of pure price change, assuming a long-run market equilibrium. (Hoffmann, 1999, 5)
89In a way, the analysis of the statistical treatment of quality in the measurement of inflation offers a highly heuristic site for empirically testing how operational neoclassical theory actually is. Here again, Sellwood (1997) spells out the underlying assumptions of neoclassical theory. According to him,
The economic theory of rational consumer behavior is Subjective Expected Utility Theory. This makes four basic assumptions about decision makers:
a) That they have a clearly defined utility function allowing a cardinal number to be assigned as an index of preference for each of a range of future outcomes.
b) That they have an exhaustive view of possible alternative strategies open to them.
c) That they can create a consistent joint probability distribution of scenarios for the future associated with each strategy.
d) That they choose between alternative strategies in order to maximize their subjective expected utility. (Sellwood, 1997, 5)
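For reference, the maximization problem implied by assumptions (a) to (d) can be stated compactly as follows; this is a standard textbook formulation, not taken from Sellwood’s paper.

```latex
\max_{s \in S} \; \mathbb{E}\left[U \mid s\right]
  \;=\; \sum_{i} \pi(x_i \mid s)\, u(x_i)
```

where \(S\) is the set of available strategies, the \(x_i\) are the possible future outcomes, \(\pi(\cdot \mid s)\) is the decision maker’s subjective joint probability distribution over outcomes under strategy \(s\), and \(u\) is the cardinal utility index of assumption (a).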
90On the basis of this demanding set of axioms, he continues,
It seems, on this evidence “reasonable” to conclude that consumer behavior is far from the normative model of rationality suggested by Subjective Expected Utility Theory. The theory of utility is certainly elegant but claims that the “non-observable” is observed indirectly through functions the properties or form of which are not themselves observed (or observable) but are derived from assumptions or axioms, which however plausible, are rather difficult to accept without empirical support. To postulate the existence of a concept such as utility for the purpose of mathematical conjecture is surely rational. To proceed from there to draw conclusions about human rationality on the basis of that existence seem only to confirm its non-existence in practice. (Sellwood, 1997, 7)
91According to him again, the consequences for quality measurements are significant.
The failure arises at the conceptual level, the concept of rationality does not represent actual behavior and without a “rational” decision process, it is difficult to see by what mechanisms the “market” could be said to reflect consumers’ valuation of the quality of a product or a product characteristic. If consumer preferences are not observable, then presumably producers and sellers set prices according to their own prejudices. However, I come to raise Caesar not to bury him. In response to a similar argument against the “cost of living” concept at the Ottawa meeting, Jack Triplett rightly posed the question “What is the alternative?” (Sellwood, 1997, 8)
- 30 This is not unrelated to hedonic method testing, which often focuses on products considered to be f (...)
92In an attempt to break out of the axiomatic rut identified by Sellwood, several papers acknowledge the need to test methodologies for measuring quality in stable markets, for example “excluding energy”, or in markets whose variations can be modelled, as in the case of seasonal consumption such as “clothing” (Eiglsperger, 2011) or “fashion” (Guédès, 2007). As still admitted by Conflitti et al. in their 2022 contribution to the Ottawa Group, stable markets are required for statistical processing to be meaningful. Generally speaking, while the papers dealing with quality analyze national practices (Austria, Italy, Canada, Germany, France, the Netherlands, Switzerland, the European Union, etc.) in almost 40 % of cases, the contributions focus on measuring the inflation of specific products, using micro-data. These goods, which are the focus of much quality adjustment and hedonic work, include “television”, “washing machines”, “cars”, etc.30
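To make concrete the kind of exercise these product-level papers perform, the sketch below illustrates the basic logic of hedonic imputation on invented television data; it is a deliberately simplified illustration, not a reconstruction of any particular paper’s method.

```python
# Simplified sketch of hedonic imputation: when a television model is replaced,
# the missing base-period price of the new model is imputed from a regression
# of log price on characteristics, and the "pure" price change is the ratio of
# its observed price to that imputed price. Data are invented.
import numpy as np

# Base-period sample: [screen size in inches, smart TV dummy] and prices (EUR).
X0 = np.array([[32, 0], [40, 0], [43, 1], [50, 1], [55, 1]], dtype=float)
p0 = np.array([250, 330, 420, 520, 600], dtype=float)

# OLS of log price on a constant and the characteristics.
A = np.column_stack([np.ones(len(X0)), X0])
beta, *_ = np.linalg.lstsq(A, np.log(p0), rcond=None)

# Replacement model observed only in the comparison period: 50", smart TV.
x_new = np.array([1.0, 50.0, 1.0])        # constant term plus characteristics
p1_new = 500.0                            # observed comparison-period price
p0_imputed = float(np.exp(x_new @ beta))  # imputed base-period price

print(f"Imputed base price: {p0_imputed:.1f} EUR")
print(f"Pure price relative: {p1_new / p0_imputed:.3f}")
```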
- 31 The term “pure price” is cited with varying frequency across Ottawa Group sessions each year, rangi (...)
93In this theoretical dynamic, many papers in the corpus studied refer to the “pure price”,31 considering that the measurement of inflation should, ideally, follow the evolution of this pure price, free from the noise of quality (see introduction). This notion of a “pure price” suggests that all that remains in the goods traded is a quantitative relationship, a sort of “hidden magnitude” (Orléan, 2014) that only has to be rid of various noises or random fluctuations; it is in this sense that differences of a qualitative nature are neutralized in a pure price. The theoretical environment of the pure price is that of a competitive price regime, one that disregards monopolistic forms of competition and the existence of regulated prices (tariffs), administered prices (agreements or understandings, cartels) and prices subject to high levels of speculation, whose share is increasing. It also takes little account of the differentiated pricing practices embodied in the notion of yield management, an ultra-flexible pricing strategy that aims to maximize operators’ margins by personalizing (or individualizing) prices and which has become established through close surveillance of consumers’ willingness to pay (Jany-Catrice, 2018; 2021).
94Is the opposition between the two conventions still relevant? Several indications suggest that the statistical/pragmatic convention is in decline or has even disappeared.
95First, as we saw in Section 3, Sellwood’s contributions stopped after 2001, and no author seems to have taken over this analytical perspective, at least within the Ottawa Group. Second, W. E. Diewert suggests that this opposition, which existed in the 1990s, has gradually been reduced, at least within the Ottawa Group. He provides an internalist explanation for this.
- 32 In his 1978 Econometrica article, he showed “these three indexes approximated each other to the sec (...)
The main bilateral index number formula that are in use today are the Laspeyres, Paasche, Fisher, Walsh and Tornqvist. There are four main approaches to bilateral index number theory: (i) Fixed Basket Approaches like the Lowe Index; (ii) The Axiomatic or Test Approach; (iii) Stochastic Approaches and (iv) the Economic Approach. Two equally valid basket approach indexes are the Laspeyres and Paasche indexes but they give different answers to the measurement of inflation problem. … If we take a geometric average of these two indexes, we get the Fisher index which satisfies the Time Reversal Test, so that inflation is measured the same whether we run time forward (the usual case) or backward. Thus, the Fisher index can be viewed as a “best” index from the viewpoint of the Fixed Basket Approach. The Walsh index can also be viewed as a “best” Fixed Basket type index. The Fisher index satisfies more “reasonable” tests than any competitor so it can also be viewed as a “best” index from the viewpoint of the Test Approach. The Tornqvist index can be viewed as a “best” index from the viewpoint of the Stochastic Approach to index number theory. Finally, in my 1976 Journal of Econometrics paper, I showed that the Fisher, Walsh and Tornqvist were equally “best” indexes from the viewpoint of the Economic Approach to index number theory. Thus, the four different approaches to bilateral index number theory lead to the same three index number formula as being “best” for the various approaches.32 Thus, for most practical purposes, it did not matter which index was chosen as best: empirically, each of the three best indexes tended to give the same answer. It took a long time for this result to diffuse to statistical agencies but it eventually did. Thus, price statisticians eventually realized that it was pointless to argue about which approach was the right one if each major approach gave the same answer in practice (Diewert, 1987) (Diewert, interview).
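For readers less familiar with the formulas Diewert refers to, the standard definitions can be recalled as follows (textbook notation, not drawn from the interview itself). With base-period prices and quantities \((p^0, q^0)\) and comparison-period data \((p^1, q^1)\):

```latex
P_L = \frac{\sum_i p_i^1 q_i^0}{\sum_i p_i^0 q_i^0}, \qquad
P_P = \frac{\sum_i p_i^1 q_i^1}{\sum_i p_i^0 q_i^1}, \qquad
P_F = \sqrt{P_L \, P_P}
```

The time reversal test requires \(P(p^0,p^1,q^0,q^1) \times P(p^1,p^0,q^1,q^0) = 1\); the Fisher index satisfies it, whereas the Laspeyres and Paasche indexes in general do not, which is one sense in which the Fisher index can be viewed as a “best” fixed-basket index.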
96In essence, Balk responds in a similar way, noting that although a distinction can be drawn between “The stochastic approach, which starts by laying out some more or less appropriate statistical model,” and “The economic approach, which departs from all those unverified/unverifiable neo-classical beliefs,” he emphasizes that “My own work over all those years might be seen as an example that all these approaches could be employed by the same person; that is, they are not mutually exclusive. It basically depends on the problem at hand whether to use this or that approach.”
97All in all, therefore, the references and methodological choices underpinning quality adjustment are, according to both authors, not inconsistent with the microeconomic theory of the consumer.
98This article aimed to provide a balanced assessment of the factors leading to the creation of the Ottawa Group. Three main dynamics were identified: first, a push from Offices for National Statistics to form “city groups” at a time when the need for a close alliance between statisticians and academics was felt; second, an effort to bolster international expertise in response to inflation measurement concerns following the Greenspan/Boskin period; and third, the formation of a cohesive community focused on measuring the Consumer Price Index.
99The qualitative analysis then allowed us to identify two results. (1) We first hold that quality is never conclusively defined as such, except by microeconomic theory. According to the international handbook, “the evaluation of the quality change is essentially an estimate of the additional amount that a consumer is willing to pay for the new characteristics possessed by the new quality” (CPI manual, chapter 1, 2004). Although subjectivity and the “great art” (Armknecht and Moulton, 1995, 1) involved in the treatment of quality are sometimes invoked, they are now absent from the debates. (2) We also note that the contributions of our corpus introduce the idea of complexity in the statistical treatment of quality. As early as 1995, Armknecht and Moulton stated that “quality change is one of the most difficult problems facing price index practitioners” (Armknecht and Moulton, 1995). In 2024, this concern is still present, and in similar terms: in his 2024 contribution, Diewert acknowledges that much remains to be done in the treatment of quality adjustment, since in the “looking ahead” section he notes that “quality adjustment remains a hot research topic” (Diewert, 2024, 42). These widely shared findings suggest that little practical innovation and progress have been made. This leads Conflitti et al. (2022) to express surprise that no method has been consolidated since the Boskin Commission.
Strange enough that against this background, no standard benchmark for quality adjustment practices has emerged. The Boskin Commission itself mixed quality adjustment bias and bias from the introduction of new products, therefore applying very different techniques for the bias estimation for each product group: assessing the consumer surplus by increased variety for food and beverages; back-of-the-envelope calculations for quality differences in housing and new motor cars; comparison to hedonic indices for appliances and electronics; comparison with matched-model approach for clothing; valuation of an increased service level for motor fuels; and extending the results of particular studies on medical treatments to the whole health sector. (Conflitti et al., 2022, 5)
100Finally, we identified, in the 1990s, two conflicting conventions around CPI measurement. Since then, theoretical approaches (like microeconomics and utilitarianism) have largely overshadowed empirical methods, with a strong focus on hedonic techniques, leading to forms of methodological convergence. However, this theoretical stance struggles with the increasing complexity of modern capitalism, marked by diverse products, intricate trade and pricing structures, and high costs of statistical processing.
101Today, statistical agencies also face pressures from new public management, leaving the question of quality treatment unresolved. On one hand, theoretical progress has stagnated, while on the other, economic and practical challenges remain. We hope that further analysis of the recently constructed database will provide insights that can refine this preliminary and somewhat pessimistic conclusion.
The authors would like to extend their warmest thanks to those involved in the history of the Ottawa Group for their contributions, and to the article’s reviewers. They would also like to thank the Normandy Region for funding the EQAM Chair.