
The Computerization of Economics. Computers, Programming, and the Internet in the History of Economics

The Computer and the Statistician. Computerization and the Mobilization of Scanner Data by National Statistical Institutes for the Construction of Price Indexes

Julien Gradoz

Abstract

This article studies the consequences of the availability of scanner data, resulting from the computerization of many retail stores, on the practices of national statistical institutes in constructing price indexes. In particular, it details the change in price index methodology made possible by the availability of scanner data and examines the obstacles that may explain the low mobilization of scanner data by national statistical institutes.


Full text

The choice of a price index to measure the evolution of prices is a central issue for national statistical institutes (NSIs), notably because in many countries the minimum wage or retirement benefits are indexed to the evolution of prices. This choice depends on complex theoretical, political and institutional factors (Eurostat, 2018). It also depends on the data available to NSIs and their capacity to process these data. When Irving Fisher compared hundreds of price index formulas in order to determine an “ideal” price index, he included the time required to compute price indexes among his criteria (1922, chapter XV). The diffusion of computers in society, a process often labelled computerization (e.g., Kling and Iacono, 1988), transformed the data available to NSIs, but also the computing capacities available to process these data. This article analyzes the impact of computerization on the construction of price indexes by NSIs. The aim is not to downplay the role of theoretical, political and institutional factors, but simply to highlight a factor that has been relatively neglected in the price index literature, namely the way scanner data impacted the construction of price indexes by NSIs. The availability of scanner data resulted from the computerization of retail stores. These data are

… electronic records of transactions that establishments collect as part of the operation of their businesses. The most familiar and now ubiquitous form of scanner data is the scanning of bar codes at checkout lines of retail stores (Feenstra and Shapiro, 2003a, 1).

Compared to the data traditionally collected by NSIs to compute price indexes, scanner data have several advantages. First, they automatically record transactions with detailed information about the product (for instance, its characteristics), its price, the quantity sold and the time of the transaction. They are therefore often presented as a way to reduce the cost of data collection and the “administrative burden” for NSIs (Chessa et al., 2017, 1). Moreover, with the development of loyalty cards, scanner data can be matched with data about consumers’ characteristics (Felgate et al., 2012). Second, due to their massive nature, scanner data partially remove the sampling errors that characterize traditional methods of data collection, based on the construction of a representative but restricted sample (Feenstra and Shapiro, 2003a, 3). The reduction of sampling errors allows the construction of more detailed price indexes, for instance price indexes for specific cities. In return, scanner data require more storage and computing capacity to be processed: the French NSI, for instance, receives 1.7 billion scanner-data records monthly (Leclair, 2019, 62). Finally, due to their electronic nature, scanner data provide continuous and quasi-instantaneous information about transactions. This is a central issue for NSIs: in order to compute specific price indexes without a time lag, like the Paasche or the Fisher price indexes, information about the quantities sold during the current period is required.
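To fix ideas, here is a minimal sketch of the information a single scanner-data record typically carries; the field names are hypothetical, and actual formats vary across retailers:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScannerRecord:
    """One checkout transaction line; field names are illustrative only."""
    barcode: str         # bar code (e.g., GTIN/EAN) identifying the product
    price: float         # unit price actually paid
    quantity: int        # number of units sold
    timestamp: datetime  # moment of the transaction
    outlet_id: str       # store where the transaction took place

record = ScannerRecord("3017620422003", 1.89, 2,
                       datetime(2022, 3, 14, 17, 5), "store-042")
```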

  • 1 According to Chessa and Griffioen (2019, 52), by 2019, only ten countries in Europe had explicitly (...)

Between the 1990s and the 2000s, scanner data were considered a promising opportunity for NSIs, although few scanner data were available. Moreover, the storage and computing capacities required to process these data were not accessible to most scholars and NSIs. Consequently, articles published during this period focused on a specific product (Lowe, 1998; Prud’homme et al., 2005) or discussed the methodological issues associated with scanner data (Silver, 1995). Conversely, since the 2010s, due to the newly available storage and computing capacities, and a general reflection about big data, several NSIs have explicitly integrated scanner data into the construction of price indexes (Leclair et al., 2019). However, this integration is still limited to a few NSIs.1 This article proposes several justifications to explain the current low mobilization of scanner data by NSIs, despite the advantages of such data. Our analysis focuses on the debates concerning scanner data that occurred in NSIs over the 2000-2022 period. It mainly relies on the literature about scanner data produced by NSIs, but also on the scholarly literature about scanner data, sometimes sponsored by NSIs, notably by the Bureau of Labor Statistics in the United States (National Research Council, 2002; Feenstra and Shapiro, 2003) and by the INSEE in France (Économie et Statistique, 2019; Leclair, 2019).

The proposed analysis follows the recent literature in the history of economic thought that has studied the impact of computerization on economists’ practices. This literature highlighted two important results. The first result is that computers must not simply be considered as extending the data and the computing capacities available to economists. Computers fundamentally change the practice of economists, and the methodology they use to study specific issues (Backhouse and Cherrier, 2017; Boumans et al., this issue). For instance, while the availability of computers generated new types of data through simulations (e.g., Diss and Kamwa, 2020), the use of simulations in economics also transformed the type of models mobilized to describe agents’ behavior, for instance through the development of agent-based modeling (e.g., Wellman, 2020). Likewise, while the availability of computers increased the computing capacities available to economists, and notably allowed the use of big data in econometrics (e.g., Einav and Levin, 2014), the use of big data challenged the traditional definition of “significance,” because the volume of data processed is such that most results produced with these data are significant (Taylor et al., 2014, 6). Consequently, economists had to develop new statistical indicators to evaluate the quality of their results, contributing to an important methodological shift in econometrics. The second result is that computerization is not a natural phenomenon. Notably, the existence of a technology implies neither that it will be adopted by economists, nor that they will exploit the opportunities offered by this technology. For instance, even though modern computers are able to produce large-scale simulations, economists have largely shunned this technique (e.g., Lehtinen and Kuorikoski, 2007). Likewise, when digital computers were created, economists continued to produce simulations on analog computers, due to the high cost of digital computers, and they only adopted digital computers for simulations several years later (e.g., Raybaut, 2020, 311).

While this literature provided insightful results about the impact of computerization on the practice of economists, it mainly focused on academic economists. This article demonstrates that these two results (that computerization changes economists’ practices, and that the existence of a given technology does not entail its adoption by economists) can be applied to the mobilization of scanner data in the construction of price indexes by NSIs. This article therefore also contributes to the price index literature, by analyzing the impact of computerization on the construction of price indexes by NSIs.

The first section analyzes the methodological shift allowed by the availability of scanner data by focusing on the possibility of computing cost-of-living indexes in almost real time. Other methodological aspects could have been discussed, but this possibility is the most representative example of the shift. This is an illustration of the first result discussed previously. The second section details the technical constraints that can explain the current low mobilization of scanner data by NSIs. These constraints are separated into three aspects: an infrastructural aspect, an automation aspect and an outsourcing aspect. These three aspects can explain the gap between the early availability of scanner data and the fact that many NSIs have only recently shown their willingness to adopt them in the near future. This is an illustration of the second result discussed previously.

1. Scanner Data and Cost-of-Living Indexes

The availability of scanner data, resulting from the computerization of retail stores, offered NSIs the opportunity to compute cost-of-living indexes in almost real time, something that was not possible with traditional methods of data collection. However, this opportunity has largely been shunned by NSIs, notably due to the multiple controversies surrounding the notion of “cost-of-living indexes.” In this section, we first explain the difference between the cost-of-goods and the cost-of-living approaches to the construction of price indexes. Then, we detail the link between the availability of scanner data and the possibility of computing cost-of-living indexes in almost real time. Finally, we highlight several reasons why cost-of-living indexes have largely been shunned by NSIs. Price indexes are a type of index numbers, which:

… are used to aggregate detailed information on prices and quantities into scalar measures of price and quantity levels or their growth (Diewert, 2008, 6212).

Depending on what NSIs aim to measure, different price indexes will be constructed. There exist two main approaches to the construction of price indexes: the cost-of-goods approach and the cost-of-living approach. The cost-of-goods approach aims “to measure the effects of price changes on the cost to a household of purchasing a specified basket of goods and services” (National Research Council, 2002, 16). In this approach, a basket of goods is specified, defined as a vector of the quantities consumed of a subset of products available in the economy. Then, this basket of goods is associated with a price index. Finally, this price index is used to compare two periods or geographical areas, by assuming that the basket of goods is the same between the two periods or geographical areas.

This approach is used by many NSIs to measure inflation, by extrapolating the evolution of the price index as a more general evolution of the price level of the economy. Notably, the Laspeyres and Paasche indexes are compatible with the cost-of-goods approach. Denote $L_{t/0}$ the Laspeyres index between period $t$ and period 0 and $P_{t/0}$ the Paasche index between period $t$ and period 0. The basket of goods is composed of $n$ products, $P_i(t)$ denotes the price of product $i$ in period $t$, and $Q_i(t)$ the quantity of product $i$ sold in period $t$. We have:

$$L_{t/0} = \frac{\sum_{i=1}^{n} P_i(t)\, Q_i(0)}{\sum_{i=1}^{n} P_i(0)\, Q_i(0)}$$

and

$$P_{t/0} = \frac{\sum_{i=1}^{n} P_i(t)\, Q_i(t)}{\sum_{i=1}^{n} P_i(0)\, Q_i(t)}$$

  • 2 For instance, the chained Laspeyres index is used by the French NSI to measure the evolution of pri (...)
  • 3 The book, supported by the Bureau of Labor Statistics, proposed a state of the art concerning the c (...)

For these two price indexes, the products included in the basket of goods are the same between the two periods, but so are the quantities consumed of each product, as we focus either on $Q_i(0)$ or on $Q_i(t)$. The Laspeyres index is generally used by NSIs to measure the evolution of prices because the Paasche index requires information about the quantities sold during the current period, which is generally not available immediately.2 In the book At What Price? Conceptualizing and Measuring Cost-of-Living and Price Indexes, the National Research Council (2002, 24) highlighted that price indexes requiring information on $Q_i(t)$, like the Paasche index, are generally computed with a two-year lag.3
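As a minimal numerical sketch of these two formulas, with hypothetical prices and quantities for a three-product basket:

```python
import numpy as np

p0 = np.array([2.0, 5.0, 1.5])  # prices P_i(0) in period 0
pt = np.array([2.2, 5.5, 1.4])  # prices P_i(t) in period t
q0 = np.array([10, 4, 25])      # quantities Q_i(0) sold in period 0
qt = np.array([9, 5, 27])       # quantities Q_i(t) sold in period t

# Laspeyres: base-period quantities, computable as soon as current prices are known.
laspeyres = (pt @ q0) / (p0 @ q0)

# Paasche: current-period quantities, hence the time lag discussed below.
paasche = (pt @ qt) / (p0 @ qt)

print(f"Laspeyres: {laspeyres:.4f}, Paasche: {paasche:.4f}")
```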

The existence of this time lag is a central issue for NSIs. Indeed, in many countries, the level of the minimum wage and other macroeconomic aggregates are indexed to the evolution of prices. Consequently, an imperfect measure of the evolution of prices has a direct impact on households (Silver and Heravi, 2001). This situation explains why price indexes cannot be computed with a significant time lag. For instance, if an economy faces a sudden and large variation of prices, it is necessary to adjust macroeconomic aggregates quickly in response to this shock. If price indexes are computed with a two-year lag, this variation cannot be taken into account. Scanner data precisely circumvent this issue because they provide quasi-instantaneous information about transactions (Tongur, 2019, 35). For instance, the French NSI receives scanner data with a two-day lag (Leclair, 2019, 63).

  • 4 For a detailed analysis of these non-price factors, see Jany-Catrice (2019).

Ideally, the evolution of the price index between two periods should exclusively reflect the evolution of the price of the products included in the basket of goods. In practice, this is not necessarily the case, as “non-price” factors are also likely to influence price indexes, like the evolution of products’ characteristics, the appearance of new products or the evolution of consumers’ tastes.4 Notably, the cost-of-goods approach hardly takes into account substitution effects, namely the idea that consumers are likely to modify their basket of goods between two periods as a consequence of the evolution of prices.

Conversely, the cost-of-living approach is often presented as a way to integrate substitution effects into the construction of price indexes. This approach aims to measure the effects of price changes in terms of the cost of maintaining the household’s standard of living at some specified level (National Research Council, 2002, 16). This definition is less clear than the previous one because the notion of “standard of living” is vague. The cost-of-living approach depends heavily on how the standard of living is defined in the price index literature. As underlined by Thomas Stapleford (2011, 187), the notion of standard of living has been defined by most economists through consumers’ utility functions since the 1940s. This “utility-based” definition of the standard of living mobilizes the tools of consumer theory and relies either on a cardinal or an ordinal interpretation of consumers’ utility functions. In its modern formulation, the cost-of-living approach relies on an ordinal interpretation of consumers’ utility functions and is based on the idea that, when prices evolve, consumers substitute products in order to minimize the cost of reaching a given level of utility (Hill, 2006, 26). To put it differently, this modern formulation focuses on the hypothetical variation of income that would be required to let consumers remain on the same indifference curve when they are confronted with two different sets of prices. This article focuses on this modern formulation, but a useful history of the cost-of-living approach can be found in Stapleford (2011).
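In its standard (Konüs) formulation, consistent with the description above, this hypothetical variation is written as a ratio of expenditure functions, where $C(u, p)$ is the minimum expenditure needed to reach utility level $u$ at prices $p$:

$$P_K(p^0, p^t, u) = \frac{C(u, p^t)}{C(u, p^0)}, \qquad C(u, p) = \min_{q} \left\{ p \cdot q : U(q) \geq u \right\}$$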

A concrete issue with the cost-of-living approach is that, in order to measure this hypothetical variation, formulas derived from consumer theory have to be made compatible with existing price indexes. Indeed, formulas derived from consumer theory rely on the expenditure function of consumers, sometimes referred to as their “cost function,” which is not directly measurable, whereas price indexes rely on price and quantity data. Therefore, it is necessary to derive cost functions which can be estimated with price and quantity data. An important contribution addressing this issue was provided by Walter Erwin Diewert (1976; 1978), through the notion of “superlative indexes”:

  • 5 The Törnqvist index is defined as $$T_{t/0} = \prod_{i=1}^{n} \left( \frac{P_i(t)}{P_i(0)} \right)^{\frac{1}{2}\left( s_i(0) + s_i(t) \right)},$$ where $s_i(t) = P_i(t)Q_i(t) / \sum_{j=1}^{n} P_j(t)Q_j(t)$ is the expenditure share of product $i$ in period $t$.

Under the assumption of utility-maximizing behavior, certain observable price indexes exactly equal the cost-of-living index if the underlying cost function has a particular functional form. For example, the Törnqvist price index is exact for the translog cost function.5 Diewert reasoned that we should prefer those price indexes that are exact for flexible cost functions (i.e., cost functions that can approximate an arbitrary twice continuously differentiable linearly homogeneous positive function to the second order). He defined such price indexes as superlative. Superlative indexes approximate to the second order the underlying cost-of-living index (Hill, 2006, 27).

To put it differently, superlative indexes are price indexes which exactly estimate cost-of-living formulas derived from specific consumers’ preferences (International Monetary Fund et al., 2004, 313). Alongside the Törnqvist index, one could also mention the Fisher index, which estimates cost-of-living formulas derived from homogeneous quadratic preferences (van Veelen and van der Weide, 2008). The Fisher index is defined as the geometric mean of the Laspeyres and Paasche indexes:

$$F_{t/0} = \sqrt{L_{t/0} \times P_{t/0}}$$

Irving Fisher described this price index as an “ideal index” (1922), notably because it “incorporates information about consumer spending patterns from both the base and comparison periods” (National Research Council, 2002, 23). To be more specific, Irving Fisher adopted an axiomatic approach to the construction of price indexes, defining a list of “tests” that price indexes should pass to be considered desirable (Balk, 1995). The Fisher index is “ideal” according to the number of tests passed (Banzhaf, 2001). Notably, Irving Fisher included the time needed to effectively compute price indexes among his criteria, and the Fisher index required less time to compute than other “ideal” candidates (Fisher, 1922, 325). Moreover, from a superlative index perspective, the Fisher index is an “exact” estimation of cost-of-living formulas derived from specific consumers’ preferences.
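Continuing the hypothetical numerical sketch given earlier, both superlative indexes below can be computed once the current-period quantities qt are known, which is precisely the practical obstacle discussed in the following paragraphs:

```python
import numpy as np

p0 = np.array([2.0, 5.0, 1.5]); pt = np.array([2.2, 5.5, 1.4])
q0 = np.array([10, 4, 25]);     qt = np.array([9, 5, 27])

laspeyres = (pt @ q0) / (p0 @ q0)
paasche = (pt @ qt) / (p0 @ qt)

# Fisher: geometric mean of the Laspeyres and Paasche indexes.
fisher = np.sqrt(laspeyres * paasche)

# Törnqvist: geometric mean of price relatives weighted by average expenditure shares.
s0 = (p0 * q0) / (p0 @ q0)  # expenditure shares in period 0
st = (pt * qt) / (pt @ qt)  # expenditure shares in period t
tornqvist = np.prod((pt / p0) ** (0.5 * (s0 + st)))

print(f"Fisher: {fisher:.4f}, Törnqvist: {tornqvist:.4f}")
```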

To summarize, superlative indexes offer a way to estimate specific cost-of-living formulas with transaction data (Richardson, 2003, 40). However, the long-standing theoretical existence of these indexes did not imply that they were effectively computed by NSIs. On the one hand, many NSIs have been reluctant to adopt the cost-of-living approach, notably because it relies on consumers’ expenditure functions, which are not directly measurable. Triplett (2001) lists NSIs which have rejected the cost-of-living approach for the construction of price indexes, like the Australian NSI and Eurostat, and explicitly mentions the role of consumer theory in this rejection:

The rationales of statistical agencies for rejecting the COL index mention, at some point, the ambiguity or etherealness of the idea of a constant utility price index, with the implication (and sometimes an explicit claim) that the idea of estimating a COL is both unrealistic empirically and ill-defined conceptually (Triplett, 2001, 314).

  • 6 The Boskin commission, also called “Advisory Commission to Study the Consumer Price Index”, was a c (...)

We can also mention the role of rhetoric in this rejection. Indeed, the cost-of-goods approach, despite numerous biases, is easier to explain to non-economists than the cost-of-living approach, which relies on the concepts of consumer theory. Among the NSIs which have adopted the cost-of-living approach, we can mention the Bureau of Labor Statistics in the United States (BLS). The adoption of the cost-of-living approach by the BLS is generally associated with the recommendations of the Boskin commission (1995),6 although the BLS had already been experimenting with this approach for several years (Stapleford, 2011, 212). This adoption contrasts with the strong criticisms against the cost-of-living approach formulated by the BLS itself during the 1960s and the early 1970s. According to Stapleford (2011, 211), the new position of the BLS can be explained by the ordinal reformulation of the cost-of-living approach, which was more acceptable to the BLS, notably because it did not require assuming that consumers’ tastes were identical between two periods. Moreover, the progressive renewal of the BLS staff, with the recruitment of economists who contributed to this ordinal reformulation (such as Robert Pollak), facilitated the adoption of the cost-of-living approach by the BLS (ibid., 212). Theoretical, political and institutional factors thus played an important role in the adoption of the cost-of-living approach by the BLS. However, this article focuses on the role of computerization, less studied in the price index literature, and therefore does not dwell on these factors.

On the other hand, the data required to compute superlative indexes were not necessarily available to NSIs. As mentioned previously, the Paasche index requires information about the quantities sold during the current period, which is generally obtained with a time lag. As the Fisher index is a geometric mean of the Laspeyres and Paasche indexes, its computation faces the same issue (National Research Council, 2002, 43). In fact, most superlative indexes, like the Törnqvist index, require information about the quantities sold during the current period. From this perspective, the availability of scanner data appeared as an opportunity to effectively compute superlative indexes:

  • 7 Likewise, “calculating a constant-utility index is a tricky affair: it is necessary to identify a u (...)

For many years, price theorists have written of “superlative” or “ideal” price indexes more as a concept than as a reality, since current period quantities have generally not been available. In a scanning environment, this restriction no longer exists (Feenstra and Shapiro, 2003a, 21).7

  • 8 A similar point was formulated by members of the BLS in a presentation in front of the same group i (...)

This remark is in line with the first result highlighted in the literature on the computerization of economics. Indeed, the availability of scanner data, through the computerization of retail stores, offered the possibility of a methodological shift in the price index literature, namely the possibility of computing superlative indexes in almost real time. Consequently, during the 1990s and the 2000s, when scanner data began to be available to NSIs, several scholars and NSIs were enthusiastic about the possibility of computing superlative indexes (National Research Council, 2002, 43; Richardson, 2003, 64). Notably, NSIs which had adopted the cost-of-living approach to the construction of price indexes were early adopters of scanner data. The Dutch NSI, Statistics Netherlands, which adopted scanner data in 2002, is a case in point. During a presentation in 2001 at the International Working Group on Price Indices, a member of this NSI underlined that “Statistics Netherlands has always been, and still remains, one of the advocates of using the [cost-of-living] framework” (de Haan, 2001, 3), and he detailed how scanner data offer the opportunity to compute superlative indexes.8

Another good illustration of this enthusiasm is the book Scanner Data and Price Indexes, edited by Robert Feenstra and Matthew Shapiro (2003), which gathers several papers presented during the annual Conference on Research in Income and Wealth (2000), supported notably by the BLS. Many price index specialists participated in this conference, like Diewert, and this book is one of the main references concerning scanner data written in the 2000s. The book extensively discusses the methodological shift allowed by the availability of scanner data and also insists on the necessity of adapting the models from which cost-of-living formulas are derived in order to integrate scanner data. For instance, Feenstra and Shapiro (2003b) highlighted that the models used at the time did not take into account consumers’ reaction to advertising and rebates. However, if we assume that consumers can store the products they purchase, then they are likely to buy products during rebate weeks and not during the following weeks. If price indexes are computed monthly, this behavior does not necessarily have to be taken into account. However, if price indexes are computed weekly, then it is necessary to construct a model which considers this intertemporal substitution, in order to derive cost-of-living formulas consistent with high-frequency computations. As scanner data are produced quasi-continuously, they precisely offer the possibility of computing weekly price indexes, and Feenstra and Shapiro suggested a model adapted to these computations. This situation illustrates that computerization did not simply increase the computing capacities and data available to NSIs, but also offered the possibility of a methodological shift in the construction of price indexes, with the development of new models and the possibility of computing superlative indexes.

While many topics are discussed in Feenstra and Shapiro’s volume, the technical constraints associated with the effective mobilization of scanner data by NSIs are hardly mentioned. Computerization is perceived as a natural phenomenon, with the idea that once the technology is available, NSIs will integrate scanner data into the construction of price indexes. However, as mentioned in the introduction, the existence of a technology does not imply that it will be effectively adopted by NSIs. In the next section, we detail several factors that can explain the current low mobilization of scanner data by NSIs despite their availability.

2. The Technical Constraints Associated with the Mobilization of Scanner Data by NSIs

A necessary condition for the mobilization of scanner data in the construction of price indexes is the adoption of electronic systems by retail stores, but also the acquisition of storage and computing infrastructures by NSIs to process these data. Indeed, if the technology to process scanner data does not exist, NSIs will obviously not mobilize scanner data. However, the existence of this technology implies neither that NSIs will effectively adopt it, nor that they will mobilize scanner data for the construction of price indexes. This idea is in line with the second result highlighted in the literature on the computerization of economics. In this section, we detail the possible reasons for the (still) low mobilization of scanner data in the construction of price indexes by NSIs.

  • 9 Some articles have described scanner data as a “flood of information” (Feenstra and Shapiro, 2003a, (...)
  • 10 Scanner data are massive but not necessarily exhaustive (Mehrhoff, 2019, 11). For instance, service (...)
  • 11 This tripartition partially overlaps the one proposed by Leclair (2019).

To do so, it is useful to conceive of scanner data as big data (Tassi, 2018).9 The notion of big data is sometimes defined according to three properties, namely “volume, variety and velocity” (Taylor et al., 2014, 5). Scanner data conform to at least two of these properties, since the “variety” property is sometimes contested (Leclair, 2019, 69). Quite indisputably, scanner data represent massive data about transactions, and their volume constitutes a rupture with the traditional sampling approach used by NSIs to produce data (Léonard et al., 2019, 74).10 Therefore, they require specific storage and computing infrastructures to be processed. For instance, concerning the breakfast cereals market, Hawkes and Piotrowski (2003, 19) underline that, compared to a traditional sampling strategy, the volume of scanner data is about 1,000 to 1. For a one-month period in the United States, scanner data on breakfast cereals represent 1,800,000 price observations, and “these price records are, in every case, accompanied by actual quantities sold, each week, in each supermarket for each item” (Hawkes and Piotrowski, 2003, 19). These massive data reduce sampling errors in the computation of price indexes. For instance, Leaver and Larson (2001) used scanner data about cereal transactions in New York, compared the price indexes obtained through scanner data with those obtained through a traditional sampling approach, and “show [a] reduction in the standard error … by a factor of about 6” (National Research Council, 2002, 268). The reduction of sampling errors allows the construction of sub-indexes which can be used to compare different groups or geographical areas (Einav and Levin, 2014, 2; Leclair, 2019, 64). For instance, with scanner data, it becomes possible to compare the consumer price index of rural areas with that of urban areas (Li, 2019), the consumer price index of poor people with that of rich people (Faber and Fally, 2022), or even the consumer price index of rich people living in poor cities with that of rich people living in rich cities (Handbury, 2021). Breakfast cereals represent only one of the products included in the basket of goods; if we consider all the products used to compute price indexes, the volume of scanner data quickly becomes enormous. For instance, Leclair and co-authors highlight that the French NSI receives 1.7 billion scanner-data records monthly (Leclair et al., 2019, 14). Consequently, the mobilization of scanner data in the construction of price indexes raises three main issues, which can be grouped into an infrastructural aspect, an automation aspect and an outsourcing aspect.11

2.1 The Infrastructural Aspect

  • 12 A relational database is a “database in which all data are represented in tabular form. The descrip (...)
  • 13 We can note that The Office for National Statistics (2019) acquired more capacities than currently (...)

To process such volumes of data, traditional non-distributed relational databases cannot be used (Leclair et al., 2019, 18).12 Consequently, a distributed computing infrastructure is required, in which several computers are connected in order to perform specific tasks. This distributed infrastructure takes the form of a cluster of computers which communicate through a shared network and pool their computing and storage capacities. For instance, the Office for National Statistics (the NSI of the United Kingdom) uses a cluster of 57 nodes, in which “each node is a very powerful computer with processing power delivered by 40 cores, memory size of 1 terabyte (TB) and storage capacity 28 TB” (The Office for National Statistics, 2019). These storage and computing capacities far exceed those required to process traditional data collected through a sampling strategy.13

With a distributed computing infrastructure, scanner data are split into blocks, and these blocks are stored on the different computers of the cluster, with a “namenode” keeping track of how the individual blocks fit together. The main difficulty is not to design a distributed computing infrastructure; this type of infrastructure has existed since the 1980s (e.g., Zamoya, 1996). The main difficulty is to conceive algorithms performing specific statistical operations when data are split into different blocks. For instance, to compute an arithmetic mean, it is possible to compute the sum of the values contained in each block, then to aggregate these sums in order to obtain the sum of all the values contained in the dataset, then to divide this sum by the total number of observations. To put it differently, it is possible to compute a sum locally, then to compute the global sum from the sums obtained for each block. However, for many statistical operations, such as those involving joining data tables, this approach is not possible. Consequently, in order to implement these statistical operations, distributed algorithms have to be developed, whose aim is to split a statistical operation into a list of sub-operations that can be processed in the different blocks of the cluster.
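A minimal sketch of this local-then-global logic, in plain Python and assuming the data are already split into blocks (a real system would run each local step on a different node):

```python
# Hypothetical price observations, split into three blocks stored on different nodes.
blocks = [
    [2.0, 2.2, 1.9],
    [5.0, 5.5],
    [1.5, 1.4, 1.6, 1.5],
]

# Step 1: each node computes a local (sum, count) pair for its own block.
partials = [(sum(block), len(block)) for block in blocks]

# Step 2: the local results are aggregated into the global mean.
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
print(f"Global mean: {total / count:.4f}")
```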

The development of the MapReduce model by Google renewed the way distributed algorithms were conceived, especially when they were used to process big data (Dean and Ghemawat, 2008):

MapReduce automatically parallelizes and executes the program on a large cluster of commodity machines. The runtime system takes care of the details of partitioning the input data, scheduling the program’s execution across a set of machines, handling machine failures, and managing required inter-machine communication. … A typical MapReduce computation processes many terabytes of data on hundreds or thousands of machines (Dean and Ghemawat, 2010, 72).
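A toy, single-machine sketch of the map/shuffle/reduce decomposition described in this quotation, applied to hypothetical scanner records (the actual runtime distributes these steps across machines and handles failures):

```python
from collections import defaultdict

# Hypothetical scanner records: (product category, unit price, quantity sold).
records = [("cereals", 2.0, 3), ("coffee", 5.0, 1),
           ("cereals", 2.2, 2), ("coffee", 5.5, 2)]

# Map: each record is turned into a (key, value) pair.
mapped = [(category, price * quantity) for category, price, quantity in records]

# Shuffle: values are grouped by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: each group is aggregated independently, hence in parallel.
revenue = {key: sum(values) for key, values in groups.items()}
print(revenue)  # {'cereals': 10.4, 'coffee': 16.0}
```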

  • 14 A cost argument can also be mentioned, as the MapReduce model allows horizontal scaling (McCreadie (...)

An important feature of the MapReduce model is that it relies on a “master-slave architecture” (Marozzo et al., 2012). With such an architecture, data can be processed even if a node of the cluster encounters a technical problem during data processing. This is a central issue for NSIs, because when a cluster is composed of dozens of computers, a technical problem is likely to occur on at least one node during data processing. The ability to process data continuously, with a limited risk of technical failure, was a necessary condition for the effective mobilization of scanner data by NSIs (Leclair et al., 2019, 18). Consequently, even if the MapReduce model was not primarily developed for scanner data processing, it offered the possibility to compute price indexes with scanner data.14

However, working directly with the MapReduce model involves relatively low-level programming, which could represent a barrier for NSIs. Indeed, mastering distributed computing infrastructures and distributed algorithms requires adequate training in data science, which most statisticians working in NSIs had not received when this technology appeared. They were used to working with relational databases and to writing queries in high-level languages, such as SQL. A 2015 survey about the big data skills of NSIs’ employees, presented to participants of an international conference about the production of official statistics, highlighted that only 37% of respondents had already worked with big data, showing that “at present there is insufficient training in these skills” (Vale, 2015, 162).

A few years after the creation of the MapReduce model, several software tools were developed to simplify the use of this model, especially through higher-level programming interfaces. This is notably the case of the Hadoop framework (2006), written in Java (and relying on the MapReduce model), which is currently used by most NSIs that mobilize scanner data in the construction of price indexes, like those of France or the United Kingdom (Vale, 2015; The Office for National Statistics, 2019; Leclair et al., 2019). Nowadays, the Hadoop framework has been complemented by several tools, notably Apache Spark (2014), which reduces the time required to process large datasets with distributed algorithms (Alsheikh et al., 2016). In parallel with this development, statisticians working in NSIs have gradually been trained to manipulate distributed algorithms (Reinhart and Genovese, 2021). Moreover, newly recruited statisticians are more likely to have received training in data science, a situation contributing to the diffusion of scanner data in NSIs. For instance, the École nationale de la statistique et de l’administration économique (ENSAE), the school which has historically trained statisticians working for the French NSI (Desrosières, 2013), has recently introduced a course on the use of Spark for big data processing.
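As an illustration of the higher-level interface these tools provide, here is a hypothetical PySpark sketch that aggregates scanner data by product; the file path and column names are assumptions, not those of any actual NSI pipeline:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scanner-data-sketch").getOrCreate()

# Hypothetical scanner-data file with columns: barcode, price, quantity.
scanner = spark.read.parquet("/data/scanner/2022-03.parquet")

# The query is written declaratively; Spark translates it into distributed
# tasks executed across the nodes of the cluster.
monthly = (scanner
           .withColumn("revenue", F.col("price") * F.col("quantity"))
           .groupBy("barcode")
           .agg(F.sum("revenue").alias("monthly_revenue"),
                F.sum("quantity").alias("units_sold")))

monthly.show(5)
```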

In a nutshell, the mobilization of scanner data by NSIs requires complex and costly infrastructures. Even though scanner data have been available since the 1990s, and distributed algorithms capable of processing them since the mid-2000s, the effective mobilization of scanner data for the construction of price indexes only really began in the mid-2010s. For instance, Marie Leclair, former head of the French NSI’s price index division, highlighted that the first experiments with scanner data began in France in 2010, while the effective computation of a consumer price index with scanner data only happened in 2020 (Leclair, 2019, 68; Jany-Catrice, 2020, 112). The mobilization of scanner data relied on the acquisition of infrastructures by NSIs, the adequate training of statisticians and the development of software simplifying the use of distributed algorithms. The combination of these factors can explain the current low mobilization of scanner data by NSIs, but also why most of them have shown their willingness to mobilize scanner data in the near future (Eurostat, 2017, 3).

2.2 The Automation Aspect

In addition to the infrastructural aspect, the automation aspect must also be considered. Indeed, once NSIs have acquired the appropriate infrastructures to process scanner data, they face new problems associated with these massive data (Triplett, 2003, 151). Notably, most tasks that were performed manually in a sampling approach must be automated with scanner data. An often-discussed example in the price index literature is the automatic classification of products. Price indexes are constructed according to a general classification of products, for instance, in the European Union, the ECOICOP classification (Eurostat, 2018). This classification allows the construction of aggregated price indexes, like the breakfast cereals price index mentioned previously. When NSIs’ surveyors collect information about prices, they also manually associate each particular product with one of the categories of the general classification, based on their expertise (Leclair et al., 2019, 28). This classification operation is not necessarily performed by retail stores’ electronic systems; hence, scanner data do not necessarily associate prices with general categories of products.

  • 15 Auer and Boettcher (2017, 13) list five solutions, but some of them are currently not used by NSIs.
  • 16 “Text mining is defined as the process of extracting the implicit knowledge from textual data. Beca (...)
  • 17 Errors can be assessed through a manual examination of a sample extracted from classified scanner d (...)

To address this issue, NSIs have three solutions, as detailed by Marie Leclair (2019).15 First, one can extract a sample from scanner data, manually classify the products, then compute price indexes. This solution, notably used by the Italian NSI (ibid., 71), is very close to the traditional sampling approach. Second, NSIs can buy a bar-code dictionary, which associates each bar code with detailed information about the product’s characteristics. Then, based on these characteristics, products can be associated with a category of a general classification (like ECOICOP for the European Union’s NSIs). This solution is notably used by the French NSI. Finally, NSIs can mobilize automatic classification methods (Mehrhoff, 2019, 7), which often rely on text mining techniques.16 A minimal sketch is given after the quotation below. When retail stores provide scanner data to NSIs, there is often no precise indication of the category to which the product belongs. This is implicit knowledge that must be discovered in the data. However, scanner data are often provided with the product’s name and sometimes a short description. Therefore, through the use of text mining techniques, NSIs can infer the category of a product from the textual elements provided by retail stores. Current automatic classification methods are relatively powerful, even if they will always produce classification errors.17

The classifying procedure, especially if parts of it are fully automated, may lead to an incorrect classification of a few items codes to ECOICOP. Building procedures that are 100% water-tight is not a practicable solution to this, as the cost would be prohibitive (Eurostat, 2017, 23).
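A minimal sketch of such a text-mining classifier, trained on a hypothetical handful of labeled product names (real NSI pipelines use far larger training sets and more elaborate models):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical product labels with manually assigned categories (training data).
labels = ["corn flakes 500g", "ground coffee 250g",
          "choco cereals 375g", "instant coffee 100g"]
categories = ["breakfast cereals", "coffee", "breakfast cereals", "coffee"]

# TF-IDF on character n-grams is robust to the abbreviations common in product labels.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
classifier.fit(labels, categories)

# Infer the category of a new, unlabeled product description.
print(classifier.predict(["oat flakes 1kg"]))  # expected: ['breakfast cereals']
```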

While automatic classification methods can be used to associate products with more general categories, they can also be used to identify products associated with several bar codes. Indeed, a bar code is associated with a unique product, but the same product can be associated with several bar codes. For instance, when manufacturers change the packaging of a product, they often generate a new bar code for it (Leclair et al., 2019, 27). Likewise, if a product is manufactured in two different factories, the use of two bar codes can help manufacturers identify the production site, even if the product is identical (Jany-Catrice, 2020, 114). The evolution and the heterogeneity of bar codes are a central issue for NSIs. For instance, if they adopt a fixed-basket approach, they must be sure that the products included in the basket of goods are available during both the first and the second period. If bar codes change, the comparability of the basket of goods is not guaranteed. Therefore, detecting products whose bar code has changed is a prerequisite for the construction of price indexes based on scanner data. Automatic classification methods can be used to detect these situations.
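One simple way to flag such cases, sketched below under the assumption that product labels change little when only the bar code changes, is to compare the labels of bar codes that disappear with those that newly appear:

```python
from difflib import SequenceMatcher

# Hypothetical labels of bar codes that disappeared and appeared this month.
disappeared = {"3017620422003": "choco spread 400g jar"}
appeared = {"3017620429985": "choco spread 400g jar new recipe"}

# Flag pairs of labels similar enough to be the same relaunched product.
THRESHOLD = 0.75  # arbitrary cutoff for this sketch
for old_code, old_label in disappeared.items():
    for new_code, new_label in appeared.items():
        similarity = SequenceMatcher(None, old_label, new_label).ratio()
        if similarity >= THRESHOLD:
            print(f"{old_code} -> {new_code} (similarity {similarity:.2f})")
```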

The evolution of bar codes is analytically similar to the appearance of new products, a topic which has been extensively discussed in the price index literature (for a review, see Armknecht et al., 1997). Therefore, an analogy can be drawn between the two situations. When new products appear on the market, two solutions are available to NSIs. First, new products can be omitted, a situation which causes biases in price indexes, because the consumption structure is likely to change between the first and the second period (Hausman, 2003). Second, new products can replace products initially included in the basket of goods. However, this solution requires “quality adjustments.” For instance, if a more durable washing machine appears on the market, it can replace previous washing machines included in the basket of goods. However, in order to ensure that the variation in durability will not be mistaken for a price variation, it is necessary to adjust the price according to the durability of washing machines, through hedonic regressions or explicit adjustments (Moulton and Moses, 1997; Triplett, 2006). When a product is associated with a new bar code, it is identified as a new product in the database. Facing this situation, NSIs generally adopt the second solution, as the French NSI does (Leclair, 2019, 66). To put it differently, the product with the new bar code replaces the product with the old bar code in the basket of goods, in order to ensure the continuity of the data.

However, to do so, NSIs must be able to detect situations in which two different bar codes correspond to the same product, and this ability depends, again, on the performance of the algorithms used by NSIs. Moreover, if quality adjustments are required, they must also be automated, and the “quality” of these quality adjustments will depend on the performance of the algorithms. Consequently, the mobilization of scanner data for the construction of price indexes relies heavily on the capacity of NSIs to automate specific tasks, and on the performance of the algorithms used to perform these tasks.
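A minimal sketch of an automated hedonic adjustment, assuming a hypothetical dataset of washing machines in which log price is regressed on observed characteristics; the estimated coefficient then deflates the price of the replacement model:

```python
import numpy as np

# Hypothetical washing machines: columns are durability (years) and capacity (kg).
X = np.array([[8, 6.0], [10, 6.0], [12, 7.0], [10, 8.0], [14, 7.0]])
prices = np.array([350.0, 420.0, 520.0, 480.0, 590.0])

# Hedonic regression: log price on a constant and the characteristics.
design = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(design, np.log(prices), rcond=None)

# Implicit price of one extra year of durability.
print(f"Durability premium: {coefs[1] * 100:.1f}% per year")

# Quality adjustment: the replacement model has two more years of durability,
# so its observed price is deflated before entering the index.
adjusted_price = 600.0 / np.exp(coefs[1] * 2)
print(f"Quality-adjusted price: {adjusted_price:.2f}")
```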

2.3 The Outsourcing Aspect

  • 18 Based on the distinction proposed by Hox and Boeije (2005).
  • 19 Interestingly, if NSIs can be suspicious of the private sector, NSIs themselves can be accused of m (...)
  • 20 For a detailed analysis of the way quality of scanner data is assessed by NSIs, see Auer and Boettc (...)

Finally, an outsourcing aspect must be considered, namely the idea that the mobilization of scanner data requires NSIs to outsource a part of their activity. This is related to the nature of scanner data. Indeed, scanner data are collected by private firms during their business operations. Consequently, they are not primarily collected for the construction of price indexes, but in order to establish firms’ marketing or inventory strategies (Braaksma and Zeelenberg, 2015, 194). This situation explains why the private sector began to exploit scanner data several years before NSIs (Hawkes and Piotrowski, 2003, 18). Scanner data are only subsequently transferred to NSIs in order to compute price indexes. Therefore, in the Practical Guide for Processing Supermarket Scanner Data, Eurostat (2017, 5) qualifies scanner data as a “secondary data source,” namely “data originally collected for a different purpose and reused for another research question.”18 As a secondary data source, scanner data are not necessarily formatted according to the standards of NSIs, implying substantial pre-processing work for them (Landefeld, 2014, 4). Moreover, NSIs must assess the quality of scanner data and notably ensure that scanner data have not been manipulated by firms.19 To do so, several NSIs have maintained a manual collection of some prices, performed by surveyors in retail outlets, in order to check that they are identical to those collected through scanner data (Leclair, 2019, 69).20

Due to their very nature, scanner data require that data collection be delegated to the private sector. This situation represents a rupture with the traditional sampling approach, in which data collection is carried out by surveyors working for NSIs:

  • 21 An interesting issue, which is not addressed in this article, is the reconfiguration of the divisio (...)

The dependence on a retailer is greater than in the past when a retailer only had to give a price collector permission to visit the outlet. … With scanner data, the dependence on the retailer increases substantially (Eurostat, 2017, 11).21

The delegation of scanner data collection requires creating long-term relationships with private firms (Vale, 2015, 161). Indeed, if NSIs want to produce monthly price indexes, they must be sure that private firms will continuously provide their scanner data (Struijs et al., 2014, 3). Consequently, in the 2000s, many articles insisted on the necessity of creating trustworthy partnerships with data providers, in order to avoid discontinuities in the production of price indexes (Feenstra and Shapiro, 2003, 6; Richardson, 2003, 49). However, the viability of these partnerships is not guaranteed, and in the past some retail stores ceased to provide their scanner data (National Research Council, 2002, 271). The possibility of discontinuities in data collection explains why Eurostat recommends establishing contracts with private firms in order to ensure the provision of scanner data (Eurostat, 2018, 81). Moreover, some countries adopted laws which oblige firms to provide their scanner data to NSIs. This is notably the case in the Netherlands (Chessa and Griffioen, 2019, 52) and in France (Leclair et al., 2019, 30), where refusals are associated with “heavy fines, the amounts ranging from 25,000 euros for the first refusal to 50,000 euros for any subsequent refusal” (Jany-Catrice, 2020, 113). The public status of NSIs played a central role in the possibility of adopting these laws, and therefore in the possibility of integrating scanner data into the construction of price indexes.

Several factors can explain the initial reluctance of private firms to provide their scanner data to NSIs. First, some firms may fear a possible leak from NSIs. This idea was notably mentioned in a paper presented during the Partnership in Statistics for Development in the 21st Century conference in 2015 (Robin et al., 2015, 10). If such a leak happened, scanner data could be used by firms’ competitors, explaining their reluctance to share these data. As some scanner data can be matched with consumers’ characteristics, notably through loyalty cards, such a leak could also negatively impact the reputation of firms. Consequently, NSIs had to reassure private firms (Eurostat, 2014, 7), invest in the security of their infrastructure and establish strong sanctions in case of leaks (Landefeld, 2014, 17). Moreover, in order to reduce the risk of leaks, some NSIs have established specific contracts with firms owning scanner data, so that scanner data processing is performed by these firms (Struijs et al., 2014, 3). For instance, in the case of the New Zealand NSI:

One possible approach is to outsource the analytics to the data owner. Statistics New Zealand is looking to do this with scanner data, as the data owner has the necessary computing infrastructure and performing the analysis where the data are stored are cheaper and easier. An added and important benefit of this approach is that the data owner does not need to share the underlying data, which may be very sensitive (Tam and Clarke, 2015, 443).

Consequently, outsourcing scanner data processing can reduce the reluctance of private firms to share their scanner data, but can also reduce the cost of data processing (Robin et al., 2015, 9). However, this solution increases the dependence of NSIs on the private sector.

Second, firms may fear that detailed data about transactions will be used by fiscal authorities, resulting in possible prosecutions. To put it differently, even if scanner data should primarily be used for the construction of price indexes, some firms may fear that scanner data could be used for other purposes, as a consequence of similar past episodes:

Long-standing lack of trust in government in the area of privacy is a problem to such collaboration that has been exacerbated by recent highly public examples of governments’ accessing private “big data” for National Security reasons (with or without the private owner’s consent). While none of the recent transgressions, or indeed most past data breaches, have involved official statistical agencies, businesses and the public are inclined to mistrust government in general. Further, cases where official statistical agencies have made inappropriate use of confidential data have been quite public and likely tarnished the general reputation of statistical agencies around the world (Landefeld, 2014, 15).

Consequently, the possibility of mobilizing scanner data in the construction of price indexes relies heavily on trust in public institutions (Struijs et al., 2014).

Finally, as a secondary data source, scanner data are primarily collected to establish firms’ strategies, not to construct price indexes. Consequently, if firms find another way to achieve their objective, they will not necessarily rely on scanner data anymore. However, if they agree to provide their scanner data to NSIs, they will have to pursue scanner data collection, even if scanner data cease to be profitable for them. Consequently,

[if a partnership] can be beneficial to a company in the short term, it might involve costs in the long run. Indeed, given that private data is originally collected for non-statistical purposes, maintaining the extraction process can become a burden if the initial field of application loses relevance (Robin et al., 2015, 11).

To summarize, if NSIs want to mobilize scanner data in the construction of price indexes, they have to outsource a part of their activity. Moreover, they have to be sure that private firms will continuously provide their scanner data, in order to avoid discontinuities in the construction of price indexes. In countries where firms have no obligation to provide their scanner data, NSIs have had to overcome the reluctance of these firms, by investing in the security of their infrastructure or by outsourcing scanner data processing to the owners of these data.

Conclusion

In this article, we analyzed how the availability of scanner data, resulting from the computerization of retail stores, impacted the practices of NSIs concerning the construction of price indexes. In line with the recent literature in the history of economic thought about the computerization of economics, we first demonstrated that the availability of scanner data did not simply increase the data available to NSIs, but also provoked a methodological shift in the price index literature. In this article, we focused on the possibility of computing superlative cost-of-living indexes in almost real time. Second, we demonstrated that despite the availability of scanner data since the 1990s, most NSIs have not mobilized them for the construction of price indexes, or have only recently begun to do so. Three factors explaining this low and late mobilization have been discussed. First, scanner data processing requires complex and costly infrastructures, which must be combined with specific training for statisticians. Second, the mobilization of scanner data requires an important automation process, which constituted a rupture with the sampling approach traditionally used by NSIs. Finally, scanner data processing requires outsourcing a part of NSIs’ activities, a situation which raises several issues, notably concerning the continuous provision of data. This article is a first step in the study of the impact of computerization on the production of official statistics. In future articles, a similar analysis could be carried out for other types of data, like mobile data (Sakarovitch et al., 2018) or web-scraped data (Mehrhoff, 2019). Likewise, a similar analysis could be applied to other indicators produced by NSIs, notably GDP (Hassani and Silva, 2015).

I am grateful to the two anonymous referees for their insightful comments and to the participants of the workshop “The Computerization of Economics.” I thank Romain Avouac for his precious help during the writing of the article and Hugo Botton for his suggestions.


Bibliography

Alsheikh, Mohammad Abu, Dusit Niyato, Shaowei Lin, Hwee-Pink Tan, and Zhu Han. 2016. Mobile Big Data Analytics Using Deep Learning and Apache Spark. IEEE Network, 30(3): 22-29. https://doi.org/10.1109/MNET.2016.7474340.

Armknecht, Paul, Walter Lane, Kenneth Stewart, and Frank Wykoff. 1997. New Products and the U.S. Consumer Price Index. In Timothy Bresnahan and Robert Gordon (eds), The Economics of New Goods. Chicago: University of Chicago Press, 375-396.

Auer, Josef and Ingolf Boettcher. 2017. From Price Collection to Price Data Analytics. How New Large Data Sources Require Price Statisticians to Re-Think their Index Compilation Procedures. Paper presented at the fifteenth meeting of the International Working Group on Price Indices, Eltville am Rhein, Germany.

Backhouse, Roger and Beatrice Cherrier. 2017. “It’s Computers, Stupid!” The Spread of Computers and the Changing Roles of Theoretical and Applied Economics. History of Political Economy, 49(Supplement): 103-126. https://doi.org/10.1215/00182702-4166287.

Balk, Bert. 1995. Axiomatic Price Index Theory: A Survey. International Statistical Review, 63(1): 69-93. https://doi.org/10.2307/1403778.

Banzhaf, Spencer. 2001. Quantifying the Qualitative: Quality-Adjusted Price Indexes in the United States, 1915-61. History of Political Economy, 33(1): 345-370. https://doi.org/10.1215/00182702-33-Suppl_1-345.

Boskin, Michael, Ellen Dulberger, Robert Gordon, Zvi Griliches, and Dale Jorgenson. 1998. Consumer Prices, the Consumer Price Index, and the Cost of Living. The Journal of Economic Perspectives, 12(1): 3-26. https://doi.org/10.1257/jep.12.1.3.

Braaksma, Barteld and Kees Zeelenberg. 2015. “Re-Make/Re-Model”: Should Big Data Change the Modelling Paradigm in Official Statistics? Statistical Journal of the IAOS, 31(2): 193-202. https://doi.org/10.3233/sji-150892.

Bradley, Ralph, Bill Cook, Sylvia Leaver, and Brent Moulton. 1997. An Overview of Research on Potential Uses of Scanner Data in the U.S. CPI. Paper presented at the third meeting of the International Working Group on Price Indices, Voorburg, the Netherlands.

Cavallo, Alberto. 2013. Online and Official Price Indexes: Measuring Argentina’s Inflation. Journal of Monetary Economics, 60(2): 152-165. https://doi.org/10.1016/j.jmoneco.2012.10.002.

Chessa, Antonio, Johan Verburg, and Leon Willenborg. 2017. A Comparison of Price Index Methods for Scanner Data. Paper presented at the fifteenth meeting of the International Working Group on Price Indices, Eltville am Rhein, Germany.

Chessa, Antonio and Robert Griffioen. 2019. Comparaison des indices de prix des vêtements et des chaussures à partir de données de caisse et de données moissonnées sur le Web. Économie et Statistique, 509: 49-68. https://doi.org/10.24187/ecostat.2019.509.1984.

Congressional Research Service. 2013. The Chained Consumer Price Index: What Is It and Would It be Appropriate for Cost-of-Living Adjustments? Washington: Congressional Research Service.

de Haan, Jan. 2001. Generalized Fisher Price Indexes and the Use of Scanner Data in the CPI. Paper presented at the sixth meeting of the International Working Group on Price Indices, Canberra, Australia.

Dean, Jeffrey and Sanjay Ghemawat. 2008. MapReduce: Simplified Data Processing on Large Clusters. Communications of the ACM, 51(1): 107-113. https://doi.org/10.1145/1327452.1327492.

Dean, Jeffrey and Sanjay Ghemawat. 2010. MapReduce: A Flexible Data Processing Tool. Communications of the ACM, 53(1): 72-77. https://doi.org/10.1145/1629175.1629198.

Desrosières, Alain. 2013. Éléments d’histoire d’une grande école, l’ENSAE. In Alain Desrosières, Gouverner par les nombres : l’argument statistique II. Paris: Presses des Mines, 271-291. https://doi.org/10.4000/books.pressesmines.358.

Diewert, Walter Erwin. 1976. Exact and Superlative Index Numbers. Journal of Econometrics, 4(2): 115-145. https://doi.org/10.1016/0304-4076(76)90009-9.

Diewert, Walter Erwin. 1978. Superlative Index Numbers and Consistency in Aggregation. Econometrica, 46(4): 883-900. https://doi.org/10.2307/1909755.

Diewert, Walter Erwin. 2008. Index Numbers. In Steven Neil Durlauf and Lawrence Blume (eds), The New Palgrave Dictionary of Economics. Third Edition. London: Palgrave Macmillan, 6212-6243. https://doi.org/10.1057/978-1-349-95121-5_940-2.

Diss, Mostapha and Eric Kamwa. 2020. Simulations in Models of Preference Aggregation. Œconomia. History, Methodology, Philosophy, 10(2): 279-308. https://doi.org/10.4000/oeconomia.8251.

Économie et Statistique. 2019. Les Big Data dans l’indice des prix à la consommation. Économie et Statistique, 509: 1-93.

Einav, Liran and Jonathan Levin. 2014. Economics in the Age of Big Data. Science, 346(6210): 1243089. https://doi.org/10.1126/science.1243089.

Eurostat. 2014. Feasibility Study on the Use of Mobile Positioning Data for Tourism Statistics. Bruxelles: European Commission.

Eurostat. 2017. Practical Guide for Processing Supermarket Scanner Data. Bruxelles: European Commission.

Eurostat. 2018. Harmonised Index of Consumer Prices (HICP). Methodological Manual. Bruxelles: European Commission.

Faber, Benjamin and Thibault Fally. 2022. Firm Heterogeneity in Consumption Baskets: Evidence from Home and Store Scanner Data. The Review of Economic Studies, 89(3): 1420-1459. https://doi.org/10.1093/restud/rdab061.

Feenstra, Robert and Matthew Shapiro. 2003a. Introduction. In Robert Feenstra and Matthew Shapiro (eds), Scanner Data and Price Indexes. Chicago: University of Chicago Press, 1-13.

Feenstra, Robert and Matthew Shapiro. 2003b. High-Frequency Substitution and the Measurement of Price Indexes. In Robert Feenstra and Matthew Shapiro (eds), Scanner Data and Price Indexes. Chicago: University of Chicago Press, 123-150.

Felgate, Melanie, Andrew Fearne, Salvatore DiFalco, and Marian Garcia Martinez. 2012. Using Supermarket Loyalty Card Data to Analyse the Impact of Promotions. International Journal of Market Research, 54(2): 221-240. https://doi.org/10.2501/IJMR-54-2-221-240.

Fisher, Irving. 1922. The Making of Index Numbers. Boston: Houghton Mifflin.

Handbury, Jessie. 2021. Are Poor Cities Cheap for Everyone? Non-Homotheticity and the Cost of Living Across U.S. Cities. Econometrica, 89(6): 2679-2715. https://doi.org/10.3982/ECTA11738.

Hassani, Hossein and Emmanuel Sirimal Silva. 2015. Forecasting with Big Data: A Review. Annals of Data Science, 2(1): 5-19. https://doi.org/10.1007/s40745-015-0029-9.

Hausman, Jerry. 2003. Sources of Bias and Solutions to Bias in the Consumer Price Index. Journal of Economic Perspectives, 17(1): 23-44. https://doi.org/10.1257/089533003321164930.

Hawkes, William and Frank Piotrowski. 2003. Using Scanner Data to Improve the Quality of Measurement in the Consumer Price Index. In Robert Feenstra and Matthew Shapiro (eds), Scanner Data and Price Indexes. Chicago: University of Chicago Press, 17-38.

Hill, Robert. 2006. Superlative Index Numbers: Not all of them are Super. Journal of Econometrics, 130(1): 25-43. https://doi.org/10.1016/j.jeconom.2004.08.018.

Hox, Joop and Hennie Boeije. 2005. Data Collection, Primary vs. Secondary. In Kimberly Kempf-Leonard (ed.), Encyclopedia of Social Measurement. Amsterdam: Elsevier, 593-599. https://doi.org/10.1016/B0-12-369398-5/00041-4.

International Monetary Fund, International Labour Office, Statistical Office of the European Union (Eurostat), United Nations Economic Commission for Europe, Organisation for Economic Co-operation and Development and The World Bank. 2004. Consumer Price Index Manual: Theory and Practice. Washington: International Monetary Fund.

Jany-Catrice, Florence. 2019. L’indice des prix à la consommation. Paris: La Découverte. https://doi.org/10.3917/dec.catri.2019.01.

Jany-Catrice, Florence. 2020. A Political Economy of the Measurement of Inflation. The Case of France. London: Palgrave Macmillan. https://doi.org/10.1007/978-3-030-59940-9.

Jo, Taeho. 2019. Text Mining. Berlin: Springer. https://doi.org/10.1007/978-3-319-91815-0.

Kling, Rob and Suzanne Iacono. 1988. The Mobilization of Support for Computerization: The Role of Computerization Movements. Social Problems, 35(3): 226-243. https://doi.org/10.2307/800620.

Landefeld, Steve. 2014. Uses of Big Data for Official Statistics: Privacy, Incentives, Statistical Challenges, and Other Issues. Working Paper. Beijing: International Conference on Big Data for Official Statistics.

Leaver, Sylvia and William Larson. 2001. Estimating Variances for a Scanner-Based Consumer Price Index. Working Paper. Washington D.C.: Bureau of Labor Statistics. https://www.bls.gov/osmr/research-papers/2001/pdf/st010130.pdf [retrieved 14/09/2023].

Leclair, Marie. 2019. Utiliser les données de caisses pour le calcul de l’indice des prix à la consommation. Courrier des statistiques, 3: 61-75.

Leclair, Marie, Isabelle Léonard, Guillaume Rateau, Patrick Sillard, Gaëtan Varlet, and Pierre Vernédal. 2019. Les données de caisse : avancées méthodologiques et nouveaux enjeux pour le calcul d’un indice des prix à la consommation. Économie et Statistique, 509: 13-29. https://doi.org/10.24187/ecostat.2019.509.1981.

Lehtinen, Aki and Jaakko Kuorikoski. 2007. Computing the Perfect Model: Why Do Economists Shun Simulation? Philosophy of Science, 74(3): 304-329. https://doi.org/10.1086/522359.

Léonard, Isabelle, Patrick Sillard, Gaëtan Varlet, and Jean-Paul Zoyem. 2019. Écarts spatiaux de niveaux de prix entre régions et villes françaises avec des données de caisse. Économie et Statistique, 509: 51-72. https://doi.org/10.24187/ecostat.2019.509.1983.

Li, Lun. 2019. Constructing Location-Specific Price Indexes from Scanner Data. Working Paper. Peking University.

Lowe, Robin. 1998. Televisions: Quality Changes and Scanner Data. Paper presented at the fourth meeting of the International Working Group on Price Indices, Washington D.C., April 22-24.

Magnien, François and Jacques Pougnard. 2000. Les indices à utilité constante : une référence pour mesurer l’évolution des prix. Économie et Statistique, 335: 81-94. https://doi.org/10.3406/estat.2000.7523.

Marozzo, Fabrizio, Domenico Talia, and Paolo Trunfio. 2012. P2P-MapReduce: Parallel Data Processing in Dynamic Cloud Environments. Journal of Computer and System Sciences, 78(5): 1382-1402. https://doi.org/10.1016/j.jcss.2011.12.021.

McCreadie, Richard, Craig Macdonald, and Iadh Ounis. 2012. MapReduce Indexing Strategies: Studying Scalability and Efficiency. Information Processing & Management, 48(5): 873-888. https://doi.org/10.1016/j.ipm.2010.12.003.

Mehrhoff, Jens. 2019. La chaîne de valeur des données de caisse et des données moissonnées sur le web. Économie et Statistique, 509: 1-11. https://doi.org/10.24187/ecostat.2019.509.1980.

Moulton, Brent and Karin Moses. 1997. Addressing the Quality Change Issue in the Consumer Price Index. Brookings Papers on Economic Activity, 1997(1): 305-366. https://doi.org/10.2307/2534705.

National Research Council. 2002. At What Price? Conceptualizing and Measuring Cost-of-Living and Price Indexes. Washington: National Academy Press. https://doi.org/10.17226/10131.

Prud’homme, Marc, Dimitri Sanga, and Kam Yu. 2005. A Computer Software Price Index using Scanner Data. Canadian Journal of Economics, 38(3): 999-1017. https://doi.org/10.1111/j.0008-4085.2005.00313.x.

Raybaut, Alain. 2020. Analog Computing Simulations and the Production of Theoretical Evidence in Economic Dynamics. Œconomia. History, Methodology, Philosophy, 10(2): 309-329. https://doi.org/10.4000/oeconomia.8421.

Reinhart, Alex and Christopher Genovese. 2021. Expanding the Scope of Statistical Computing: Training Statisticians to be Software Engineers. Journal of Statistics and Data Science Education, 29(1): 7-15. https://doi.org/10.1080/10691898.2020.1845109.

Richardson, David. 2003. Scanner Indexes for the Consumer Price Index. In Robert Feenstra and Matthew Shapiro (eds), Scanner Data and Price Indexes. Chicago: University of Chicago Press, 39-66.

Richardson, Pete. 2018. L’apport des Big Data pour les prévisions macroéconomiques à court terme et « en temps réel » : une revue critique. Économie et Statistique, 505-506: 65-84. https://doi.org/10.24187/ecostat.2018.505d.1966.

Robin, Nicholas, Thilo Klein, and Johannes Jütting. 2015. Public-Private Partnerships for Statistics. Lessons Learned, Future Steps. Paris21 Working Paper, no. 8. Boulogne: PARIS21 Partnership in Statistics for Development in the 21st Century.

Sakarovitch, Benjamin, Marie-Pierre de Bellefon, Pauline Givord, and Maarten Vanhoof. 2018. Estimer la population résidente à partir de données de téléphonie mobile, une première exploration. Économie et Statistique, 505-506: 109-132. https://doi.org/10.24187/ecostat.2018.505d.1968.

Silver, Mick. 1995. Elementary Aggregates, Micro-Indices and Scanner Data: Some Issues in the Compilation of Consumer Price Indices. Review of Income and Wealth, 41(4): 427-438. https://doi.org/10.1111/j.1475-4991.1995.tb00136.x.

Silver, Mick and Saeed Heravi. 2001. Scanner Data and the Measurement of Inflation. The Economic Journal, 111(472): 383-404. https://doi.org/10.1111/1468-0297.00636.

Stapleford, Thomas. 2009. The Cost of Living in America: A Political History of Economic Statistics, 1880-2000. Cambridge: Cambridge University Press.

Stapleford, Thomas. 2011. Aftershocks from a Revolution: Ordinal Utility and Cost-of-Living Indexes. Journal of the History of Economic Thought, 33(2): 187-222. https://doi.org/10.1017/S1053837211000058.

Struijs, Peter, Barteld Braaksma, and Piet Daas. 2014. Official Statistics and Big Data. Big Data & Society, 1(1): 1-6. https://doi.org/10.1177/2053951714538417.

Tam, Siu-Ming and Frederic Clarke. 2015. Big Data, Official Statistics and Some Initiatives by the Australian Bureau of Statistics. International Statistical Review, 83(3): 436-448. https://doi.org/10.1111/insr.12105.

Tassi, Philippe. 2018. Les apports des big data. Économie et Statistique, 505-506: 5-15. https://doi.org/10.24187/ecostat.2018.505d.1963.

Taylor, Linnet, Ralph Schroeder, and Eric Meyer. 2014. Emerging Practices and Perspectives on Big Data Analysis in Economics: Bigger and Better or More of the Same? Big Data & Society, 1(2): 1-10. https://doi.org/10.1177/2053951714536877.

The Office for National Statistics. 2019. Using Alternative Data Sources in Consumer Price Indices (last accessed: December 2022). https://www.ons.gov.uk/economy/inflationandpriceindices/articles/usingalternativedatasourcesinconsumerpriceindices/may2019.

Tongur, Can. 2019. Mesure de l’inflation avec des données de caisse et un panier fixe évolutif. Économie et Statistique, 509: 31-47. https://doi.org/10.24187/ecostat.2019.509.1982.

Triplett, Jack. 2001. Should the Cost-of-Living Index Provide the Conceptual Framework for a Consumer Price Index? The Economic Journal, 111(472): 311-334. https://doi.org/10.1111/1468-0297.00633.

Triplett, Jack. 2003. Using Scanner Data in Consumer Price Indexes. Some Neglected Conceptual Considerations. In Robert Feenstra and Matthew Shapiro (eds), Scanner Data and Price Indexes. Chicago: University of Chicago Press, 151-162.

Triplett, Jack. 2006. Handbook on Hedonic Indexes and Quality Adjustments in Price Indexes. Paris: OECD Editions. https://doi.org/10.1787/9789264028159-en.

Vale, Steven. 2015. International Collaboration to Understand the Relevance of Big Data for Official Statistics. Statistical Journal of the IAOS, 31(2): 159-163. https://doi.org/10.3233/sji-150889.

van Veelen, Matthijs and Roy van der Weide. 2008. A Note on Different Approaches to Index Number Theory. The American Economic Review, 98(4): 1722-1730. https://doi.org/10.1257/aer.98.4.1722.

Wellman, Michael. 2020. Economic Reasoning from Simulation-Based Game Models. Œconomia. History, Methodology, Philosophy, 10(2): 257-278. https://doi.org/10.4000/oeconomia.8386.

Zomaya, Albert. 1996. Parallel and Distributed Computing Handbook. New York: McGraw-Hill.


Notes

1 According to Chessa and Griffioen (2019, 52), by 2019, only ten countries in Europe had explicitly integrated scanner data into the construction of price indexes. However, most European countries have recently shown their willingness to initiate this kind of project (Leclair et al., 2019, 20).

2 For instance, the chained Laspeyres index is used by the French NSI to measure the evolution of prices (Leclair, 2019, 66). Likewise, the Harmonised Index of Consumer Prices, computed in the European Union, is a “Laspeyres-type” price index, which includes the possibility of replacing some products in the basket of goods over time (notably for quality adjustments). Interestingly, the Bureau of Labor Statistics, in the United States, publishes two main price indexes: the “Consumer price index for all urban consumers” (CPI-U) and the “Chained-CPI-U” (Congressional Research Service, 2013, 3). The CPI-U is a Laspeyres index, while the Chained-CPI-U is a superlative index (a Törnqvist index to be precise, see footnote 5).
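For reference, these index formulas can be written explicitly. Writing $p_i^t$ and $q_i^t$ for the price and quantity of product $i$ in period $t$, the fixed-base Laspeyres index and its chained version between periods $0$ and $T$ are:

$$P_L^{0,t} = \frac{\sum_i p_i^t q_i^0}{\sum_i p_i^0 q_i^0}, \qquad P_{CL}^{0,T} = \prod_{t=1}^{T} P_L^{t-1,t}.$$

The fixed-base Laspeyres index requires quantity information only for the base period, which is why it can be computed without current-period scanner data; the chained version updates the weights at each link.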

3 The book, supported by the Bureau of Labor Statistics, provided a state-of-the-art review of the construction of price indexes in the United States. Despite the criticisms (also presented in this section), it notably encouraged NSIs to adopt the cost-of-living approach to the construction of price indexes.

4 For a detailed analysis of these non-price factors, see Jany-Catrice (2019).

5 The Törnqvist index between periods 0 and 1 is defined as:

$$P_T = \prod_{i=1}^{n} \left( \frac{p_i^1}{p_i^0} \right)^{\frac{1}{2}\left(s_i^0 + s_i^1\right)}, \qquad s_i^t = \frac{p_i^t q_i^t}{\sum_{j=1}^{n} p_j^t q_j^t},$$

where $s_i^t$ denotes the expenditure share of product $i$ in period $t$.
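As a minimal sketch (illustrative only; the function and variable names are hypothetical, and real pipelines operate on millions of records), the index can be computed from matched price and quantity data as follows:

    import math

    def tornqvist(prices0, prices1, qty0, qty1):
        """Bilateral Tornqvist price index between periods 0 and 1.

        Each argument maps a product identifier to its price or quantity.
        Only products present in both periods enter the computation
        (matched-model approach).
        """
        items = set(prices0) & set(prices1) & set(qty0) & set(qty1)
        exp0 = sum(prices0[i] * qty0[i] for i in items)  # expenditure, period 0
        exp1 = sum(prices1[i] * qty1[i] for i in items)  # expenditure, period 1
        log_index = 0.0
        for i in items:
            s0 = prices0[i] * qty0[i] / exp0  # expenditure share, period 0
            s1 = prices1[i] * qty1[i] / exp1  # expenditure share, period 1
            log_index += 0.5 * (s0 + s1) * math.log(prices1[i] / prices0[i])
        return math.exp(log_index)

    # Two products; consumers shift quantities toward the cheaper one.
    p0, p1 = {"A": 1.0, "B": 2.0}, {"A": 1.1, "B": 1.8}
    q0, q1 = {"A": 10, "B": 10}, {"A": 8, "B": 14}
    print(tornqvist(p0, p1, q0, q1))  # approx. 0.955

Because the weights $s_i^1$ depend on current-period quantities, the index can only be computed in almost real time when transaction-level data, such as scanner data, are available.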

6 The Boskin commission, also called “Advisory Commission to Study the Consumer Price Index”, was constituted at the request of the United States Senate. Its objective was to identify the different biases affecting the consumer price index computed at the time in the United States. The commission proposed several corrections to the existing consumer price index (which, according to its members, overestimated inflation), and also actively encouraged the adoption of cost-of-living indexes by NSIs. A useful history of these debates can be found in Stapleford (2009).

7 Likewise, “calculating a constant-utility index is a tricky affair: it is necessary to identify a utility function that rationalises the data. This problem is resolved formally by revealed preference theory. In practice, very detailed data on prices and quantities are required, which is made possible today by scanner data” (Magnien and Pougnard, 2000, 81; translated in Jany-Catrice, 2020, 111).
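To make the revealed-preference step concrete, the sketch below (hypothetical function names, reduced to two observations) shows the kind of consistency check involved: the Weak Axiom of Revealed Preference fails when each chosen bundle was affordable when the other was chosen.

    def dot(p, q):
        """Inner product of a price vector and a quantity vector."""
        return sum(pi * qi for pi, qi in zip(p, q))

    def warp_violation(p1, q1, p2, q2):
        """True if two (price, quantity) observations violate the Weak Axiom,
        i.e., no utility function can rationalise both choices."""
        if q1 == q2:
            return False
        affordable_2_at_1 = dot(p1, q2) <= dot(p1, q1)  # q2 affordable when q1 chosen
        affordable_1_at_2 = dot(p2, q1) <= dot(p2, q2)  # q1 affordable when q2 chosen
        return affordable_2_at_1 and affordable_1_at_2

    # Same budget, different choices: the data cannot be rationalised.
    print(warp_violation([1, 1], [2, 0], [1, 1], [0, 2]))  # True
    # Each bundle unaffordable at the other's prices: consistent.
    print(warp_violation([1, 2], [4, 1], [2, 1], [1, 4]))  # False

With scanner data, such checks run over very many price-quantity observations, which is what makes the constant-utility approach practicable.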

8 A similar point was made by members of the BLS in a presentation to the same working group in 1997 (Bradley et al., 1997, 2). As underlined previously, the BLS adopted the cost-of-living approach for the construction of price indexes.

9 Some articles have described scanner data as a “flood of information” (Feenstra and Shapiro, 2003a, 6; Tongur, 2019, 47), an approach consistent with several definitions of big data used in the economic literature (Richardson, 2018, 68).

10 Scanner data are massive but not necessarily exhaustive (Mehrhoff, 2019, 11). For instance, services are not associated with a bar code, and consequently are not reflected in scanner data (Leclair et al., 2019, 15).

11 This tripartition partially overlaps the one proposed by Leclair (2019).

12 A relational database is a “database in which all data are represented in tabular form. The description of a particular entity is provided by the set of its attribute values, stored as one row or record of the table, called a tuple. Similar items from different records can appear in a table column. The relational approach supports queries that involve several tables by providing automatic links across tables” (Encyclopædia Britannica). A non-distributed relational database is a relational database which is processed on a single computer.
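To illustrate this relational logic in the scanner data context, the following sketch (table and column names are hypothetical; Python’s built-in sqlite3 module stands in for the systems actually used by NSIs) stores products and transactions in two tables and links them through a single query:

    import sqlite3

    con = sqlite3.connect(":memory:")
    # One table per entity; the barcode attribute links the two tables.
    con.executescript("""
        CREATE TABLE products(barcode TEXT PRIMARY KEY, label TEXT, category TEXT);
        CREATE TABLE transactions(barcode TEXT, price REAL, quantity INTEGER, day TEXT);
    """)
    con.executemany("INSERT INTO products VALUES (?, ?, ?)",
                    [("001", "Coffee 250g", "beverages"),
                     ("002", "Tea 20 bags", "beverages")])
    con.executemany("INSERT INTO transactions VALUES (?, ?, ?, ?)",
                    [("001", 3.50, 2, "2019-01-03"),
                     ("002", 2.10, 1, "2019-01-03")])
    # A join query computes turnover by category across both tables.
    for row in con.execute("""
            SELECT p.category, SUM(t.price * t.quantity) AS turnover
            FROM transactions t JOIN products p ON p.barcode = t.barcode
            GROUP BY p.category"""):
        print(row)  # ('beverages', 9.1)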

13 We can note that The Office for National Statistics (2019) acquired more computing capacity than currently required, in anticipation of the future use of web-scraped data, which will be combined with scanner data for the construction of price indexes.

14 A cost argument can also be mentioned, as the MapReduce model allows horizontal scaling (McCreadie et al., 2012, 873), namely the possibility of adding many cheap, low-performance computers to the cluster to process statistical operations, rather than having to invest in increasingly powerful and costly computers.
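As a toy, single-machine sketch of the MapReduce pattern (hypothetical data; in production, the map and reduce phases are distributed across the machines of the cluster), the average observed price per barcode can be computed as follows:

    from collections import defaultdict

    records = [("001", 3.50), ("002", 2.10), ("001", 3.40)]  # (barcode, price)

    # Map phase: emit (key, value) pairs; each record is processed
    # independently, so the work can be split across many cheap machines.
    mapped = [(barcode, price) for barcode, price in records]

    # Shuffle phase: group the emitted values by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # Reduce phase: aggregate each group; also parallelizable across keys.
    averages = {key: sum(vals) / len(vals) for key, vals in groups.items()}
    print(averages)  # {'001': 3.45, '002': 2.1}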

15 Auer and Boettcher (2017, 13) list five solutions, but some of them are currently not used by NSIs.

16 “Text mining is defined as the process of extracting the implicit knowledge from textual data. Because the implicit knowledge which is the output of text mining does not exist in the given storage, it should be distinguished from the information which is retrieved from the storage” (Jo, 2019, 3).
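In the scanner data context, text mining is notably used to map textual product labels to the categories of the consumption classification. The sketch below is deliberately naive (the keyword rules are hypothetical; NSIs typically rely on supervised learning methods trained on manually labelled samples):

    # Keyword rules standing in for a trained text classifier.
    RULES = {"coffee": "beverages", "tea": "beverages", "shampoo": "hygiene"}

    def classify(label: str) -> str:
        """Assign a product label to a consumption category based on its text."""
        for token in label.lower().split():
            if token in RULES:
                return RULES[token]
        return "unclassified"  # flagged for manual review (see note 17)

    print(classify("Arabica Coffee 250g"))  # beverages
    print(classify("Mystery item"))         # unclassified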

17 Errors can be assessed through a manual examination of a sample extracted from classified scanner data.

18 Based on the distinction proposed by Hox and Boeije (2005).

19 Interestingly, while NSIs can be suspicious of the private sector, NSIs themselves can be accused of manipulating the production of official statistics. In this situation, scanner data can be collected by a third party and used to compute a consumer price index free from the influence of political authorities. The obtained results can then be compared with the consumer price index proposed by official statistics. This is what happened in Argentina, whose NSI was accused of underestimating inflation: using retail prices displayed online to compute a consumer price index, Cavallo (2013) showed that inflation was about three times higher than the official figure.

20 For a detailed analysis of the way quality of scanner data is assessed by NSIs, see Auer and Boettcher (2017).

21 An interesting issue, which is not addressed in this article, is the reconfiguration of the division of statistical labor resulting from the introduction of big data in NSIs, and the growing number of partnerships with private firms.




