
History of Econometrics

Inextricability of Autonomy and Confluence in Econometrics

Duo Qin
p. 321-341

Abstract

This article examines how “confluence” and “autonomy”, two key concepts introduced by Frisch around 1930, disappeared from econometrics textbooks, and why only fragments of each have survived in mainstream econometrics. We relate this disappearance to a flawed textbook position which unquestioningly treats a priori theoretical models as structural models. We show how the confluence-autonomy pair, taken as a whole, reflects the complexity of the econometrician’s objective: to find and validate robust, economically meaningful models from a large number of interdependent variables observable in an open, real world. This complexity makes it essential for applied research to possess a set of model design rules combining a priori substantive reasoning with a posteriori statistical testing. It also makes the search for an adequate minimum model closure the primary concern, one far more important than the parameter estimation emphasised in textbooks.


Full text

1“Autonomy” and “confluence” were introduced by R. Frisch as two fundamental concepts when he embarked on the econometrics enterprise around 1930. However, neither concept has survived into textbook expositions of econometrics. Although aspects of the two can be found in various terminologies and techniques, such as multicollinearity, structural models versus their reduced forms, and the identification of simultaneous-equation models (SEMs), none of them, nor any combination of them, fully captures the essence of the two terms; nor are they taught as the core of econometrics.

2The present investigation looks into the history of how the gist of the two terms has been largely lost and why only fragments of the pair have descended into textbooks. Specifically, the investigation starts from “autonomy” and its association with “structural relation” (see Section 1). Section 1 describes how Frisch defined “structural relation” versus “confluent relation,” and subsequently “autonomy” as an essential characteristic of structural relations in terms of parameter invariance. It also describes how the problem of empirically verifying “autonomy” came to be overlooked and bypassed, during the formalisation of econometrics by Haavelmo and the Cowles Commission (CC) in the 1940s, hereinafter referred to as the Haavelmo-CC programme, by the assumption that autonomous structural relations/models were built and determined by a priori economic reasoning alone. Unfortunately, this assumption was upheld unquestioningly as the basis of textbook econometrics, which grew out of and formed an essential part of the consolidation of the Haavelmo-CC programme, e.g., see Qin (2013a, Chapter 1). But as evidence against the assumption accrued from applied studies, alternative modelling approaches emerged during the 1970s and 1980s which challenged the monopoly of a priori economic reasoning, i.e., substantive reasoning, in producing structural models. However, the consequent introduction of data-based model selection rules has encountered considerable resistance from modellers drilled in textbook econometrics.

3Section 2 focuses on “confluence” and its particular link to “multicollinearity” as well as to its mirror problem, omitted variable bias (OVB). It describes the polarised positions, during the post-Haavelmo-CC era, from which multicollinearity and OVB have been conceived. While the textbook position is to treat the problems by choosing and devising estimators, in practice they are often circumvented by data-assisted model design and modification, such as reparameterisation, which effectively forsakes the assumption that theory alone designates models as “structural” a priori. These polarised positions actually reflect the duality of confluence. On the one hand, confluence highlights the risk of making erroneous inferences naïvely based on statistical (data-based) association; on the other hand, it implies that it is logically inadequate to construct empirically viable structural models solely on the basis of a priori substantive reasoning. The duality leads us to the next section, where “autonomy” and “confluence” are examined jointly.

4Section 3 maintains the importance of viewing “autonomy” and “confluence” as a unity of opposites. It shows, from a methodological viewpoint, how over-simplistic it is to regard the pair merely as opposites and not as a unity. The necessity of treating the pair as a unity entails the need for applied researchers to be actively engaged in empirical model design. The design process has to be guided by a set of model selection rules or criteria based on both substantive association and statistical association. The need for a multiple-rule-based model design process also makes the issue of minimum model closure the primary and central task for applied modellers, a task much more important than the estimator choice taught in textbooks. Moreover, the task implies the importance of combining a priori general theoretical postulates with specific sample data features, since data-instigated local or institutional components are often indispensable for the invariance of the theoretical component, because of the dialectical unity of autonomy and confluence.

5The history of “autonomy” has been examined by Aldrich (1989) and the history of “confluence analysis” has been discussed by Hendry and Morgan (1989). The present investigation complements those two studies in several respects. First and most obviously, it emphasises the intimate relationship between the two concepts; it also covers a longer historical period and a wider scope than the two papers; more importantly, it tries to connect the historical discussion with basic modelling issues and problems widely seen in current applied economic studies.

1. Structural Relation and Autonomy

6The origins of “structural relation” and “autonomy” can be traced to Frisch’s early works around 1930, as shown by the recent publications of Frisch’s 1930 Yale lecture notes (see Bjerkholt and Qin, 2010) and his lecture notes for the 1931 Nordic Economic Meeting (see Qin, 2013b, 21, vol. I). In his 1930 Yale lecture notes, Frisch classified three types of relations in economic theory: “Structural, confluent and artificial” (Bjerkholt and Qin, 2010, 73), and defined a “structural relation” by two specific properties: (a) “It holds good identically in the variables involved” and (b) “It holds good even though we do not assume any particular side relation to be fulfilled” (Ibid., 75). In present parlance, property (a) means that the relation holds valid irrespective of what values the explanatory variables involved take, and property (b) states that the relation holds unchanged even when the way in which the explanatory variables are generated has been altered. A relation which satisfied (a) but not (b) was defined as a “confluent relation”, while one in which neither property held was an “artificial relation” (Ibid., 75). In a subsequent lecture given at the Nordic Economic Meeting in 1931, Frisch used “structure” for the “economic theoretical structure” characterising business cycles and “prime relations” as an alternative to what he had defined earlier:

A prime relation is a relation between a certain number of variables, the relation being such as to hold good identically in all the variables involved, and furthermore such as to hold good quite independently of whether certain other relationships are fulfilled or not. Thus economic prime relations are relations of an autonomous sort. (Qin, 2013b, 21, vol. I)

7It is notable that Frisch’s early descriptions of structural relations embrace two basic properties—parametric invariance and conditional independence—with the former being much more explicit than the latter. The latter amounts to minimum model closure, a condition discussed in more detail in Section 3 below. However, those descriptions were based on known theoretical relations with known parametric values. When it came to the situation where the parametric values of a priori postulated theoretical relations had to be estimated through corresponding statistical relations, Frisch found himself in need of extending the concept of autonomy to “the connection between statistical and theoretical relations” (1938, 407). In his memorandum on Tinbergen’s pioneering macroeconometric modelling work for the League of Nations, Frisch expressed his major concern over what is now widely known as the identification problem of SEMs, i.e., the problem of whether statistical methods would be able to provide unique parametric estimates of certain theoretical relations built on “knowledge outside” what passively observed data could directly support. The exigency of tackling this identification problem was based on the conviction that those theoretical relations had “a higher degree of ‘autonomy’” than the relevant statistically estimable relations (section 4, Ibid.). Here, a high degree of autonomy became the key property for maintaining the superiority of those under-identified structural relations in an SEM. But the very unidentifiable nature of those relations logically put the property beyond the realm of empirical verifiability. Frisch did not seem to be fully aware of this logical problem, since he maintained:

We must introduce some fundamentally new information. We do this by investigating what features of our structure are in fact the most autonomous in the sense that they could be maintained unaltered while other features of the structure were changed. This investigation must use not only empirical but also abstract methods. So we are led to constructing a sort of super-structure, which helps us to pick out those particular equations in the main structure to which we can attribute a high degree of autonomy in the above sense. The higher this degree of autonomy, the more fundamental is the equation, the deeper is the insight which it gives us into the way in which the system function, in short, the nearer it comes to being a real explanation. Such relations form the essence of ‘theory.’ (section 5, Ibid.).

8When Frisch’s concept of autonomy was expounded by Haavelmo in his 1944 monograph, verification of autonomy was essentially entrusted to “abstract” rather than “empirical” methods. In other words, “autonomy” was automatically attributed to models constructed a priori solely on the basis of substantive knowledge. This was best illustrated by his example of drivers having no need of a full understanding of the automobile mechanism (see Haavelmo, 1944, Section 8). As such, “autonomous” served as an intrinsically qualitative adjective, almost a substitute for “structural” in justifying the validity of theoretical constructs. The term implicitly granted these constructs the property of adequate model closure. Haavelmo’s exposition not only helped to promote Frisch’s structural approach at the time, but also provided powerful arguments for theory-based counterfactual analyses much later on, e.g., see Heckman (2008).

9Haavelmo’s monograph was particularly influential in the CC research programme during the 1940s, which culminated in the publication of CC Monograph 10 (see Koopmans, 1950). The CC programme cast its long-lasting impact on mainstream econometrics mainly via two achievements: the formalisation of identification conditions and the development of maximum-likelihood based estimators for SEMs. Both achievements rested on a key assumption that the task of econometricians was to provide statistically optimal and consistent estimators for the structural parameters of theoretical relations formulated a priori by economists. This assumption was implied in the CC research agenda at the time, namely that the issue of model choice was left aside (see Marschak, 1950, Section 2.6). The assumption effectively assigned a maintained status to those given theoretical relations. Consequently, the concept of autonomy was little delved into in the CC works, as the group shared Haavelmo’s view of autonomy as an intrinsic property of a priori formulated structural relations. That may explain why the concept is now absent from textbook econometrics, which has evolved narrowly around the techniques developed by the Haavelmo-CC programme.


10Nevertheless, textbook econometrics has not ignored the problem that parameter estimates might not hold constant as sample data accrue. The Chow test, time-varying parameter models and regime-switching models are arguably the most popular techniques widely taught in textbooks nowadays. The practical appeal of these techniques is plainly obvious. There is now ample evidence showing how difficult it is to find empirical models which retain parameter constancy for long outside the data samples from which they are estimated; in other words, the forecasting performance of empirical models has remained a very weak point of econometric research. In terms of interpretation, observed failures of parameter constancy have commonly been regarded as “structural” changes or shifts, probably following the classical Klein-Goldberger model (1955). They are seldom seen as destructive evidence against the autonomous property of a priori formulated theoretical models. Such a common belief can also be traced to a lack of consensus among economists as to whether it is appropriate to characterise economic behaviour by time-invariant parametric relations. For example, Koopmans (1949, 89) wrote, “whether even the most autonomous behaviour equation can themselves be expected to have much persistence through time…is an empirical question of great social importance.” It is interesting to note from Koopmans’s remark how autonomy was seen as closely associated with substantive-knowledge based model closure (see Section 3) rather than with parametric constancy.1 Koopmans’s view was subsequently reinforced by Hurwicz’s definition of a “structural” model (1962), see also Sims (1977).
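
To make the parameter-constancy idea concrete, the following is a minimal sketch of the classical Chow test on simulated data; the break date, sample size and data-generating process are illustrative assumptions, not taken from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_rss(y, X):
    """Residual sum of squares from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

# Simulated data with a slope shift half-way through the sample (illustrative numbers).
n, break_at = 120, 60
x = rng.normal(size=n)
slope = np.where(np.arange(n) < break_at, 1.0, 1.6)   # parameter shifts at the break
y = 0.5 + slope * x + rng.normal(scale=0.3, size=n)
X = np.column_stack([np.ones(n), x])

# Chow test: compare the pooled RSS with the sum of the two sub-sample RSSs.
k = X.shape[1]
rss_pooled = ols_rss(y, X)
rss_1 = ols_rss(y[:break_at], X[:break_at])
rss_2 = ols_rss(y[break_at:], X[break_at:])
F = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))
print(f"Chow F({k}, {n - 2 * k}) = {F:.2f}")   # a large F rejects parameter constancy
```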


11The view that parametric constancy does not exist was probably most powerfully exploited by the Lucas critique (1976). The critique attributed the widespread forecasting failure of macro-econometric models in the wake of the 1973 oil crisis to the inability of constant-parameter models to capture the rational behaviour of economic agents in response to drastic policy changes. According to Lucas’s interpretation, macro-econometric models were doomed because of unavoidable parametric shifts caused by policy-induced rational behavioural changes. This interpretation was, however, shown to be misconceived, e.g., see Sims (1982; 1986), and to lack conclusive empirical support, e.g., see Doan et al. (1984) as part of Sims’s endeavour to develop the VAR (Vector AutoRegression) approach.2


12The Lucas critique also stimulated econometric research into utilising observed parametric shifts as a way to weed out empirically unsound theoretical models. The research was mainly carried out under the LSE (London School of Economics) approach.3 Specifically, policy changes were represented as parametric shifts in autoregressive models of the policy variables concerned. Of the theoretical models in which those policy variables are used as conditional variables, only the ones whose parameters were found to withstand the policy changes under recursive estimation were retained as data-congruent models, e.g., see Favero and Hendry (1992). Noticeably, such a model selection criterion effectively revived Frisch’s two specific properties for structural relations from over half a century before, and extended Frisch’s idea by putting the properties into a statistically testable framework.

13In fact, the research on testing the Lucas critique was intimately related to the ongoing research, from the early 1980s, in the conceptual formalisation of the LSE dynamic specification approach. The formalisation highlighted the importance, in general, of explicitly representing economically causal premises by conditional expectations, which underlie regression-based conditional models, e.g., see Hendry and Richard (1982; 1983), and, in particular, of refining the concept of exogeneity with respect to the conditions in causal model specification, e.g., see Sims (1977) and Engle et al. (1983). The latter paper was particularly interesting as it classified three types of exogeneity: weak, strong and super exogeneity. Roughly, a priori postulated conditional variables were regarded as weakly exogenous; they became strongly exogenous when the way they drove the modelled variable was shown to be dynamically sequential rather than simultaneous; and they became super exogenous when their corresponding parameters in explaining contemporaneously the modelled variable were shown to remain invariant to regime or policy shifts, such shifts being demonstrated by significant parameter changes in the time-series autoregressions of the exogenous variables concerned, e.g., see also Qin (2013a, Chapters 4 and 7). These exogeneity conditions have turned a priori causal premises into empirically testable hypotheses as far as possible, especially in relation to the implicitly assumed autonomous status of those premises.
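
As an illustration of the logic only, here is a small simulation sketch of an invariance (super exogeneity) check in the spirit just described: first establish that the marginal process of the conditional variable shifts across regimes, then test whether the conditional model’s parameters stay constant across those same regimes. The variable names, regime dummy and test form are illustrative assumptions, not the specific procedures of Engle et al. (1983) or Favero and Hendry (1992).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
regime = (np.arange(n) >= 100).astype(float)   # illustrative policy-regime shift at t = 100

# The marginal (policy) process of z shifts with the regime; the conditional y|z parameter does not.
z = 1.0 + 2.0 * regime + rng.normal(scale=0.5, size=n)
y = 0.3 + 0.8 * z + rng.normal(scale=0.4, size=n)

# Step 1: does the marginal model of z shift across regimes?  (it should, by construction)
marg = sm.OLS(z, sm.add_constant(regime)).fit()
print("t-value of the regime shift in the marginal model of z:", round(marg.tvalues[1], 1))

# Step 2: is the conditional y|z relation invariant to that shift?
# Insignificant regime and regime*z terms are consistent with super exogeneity of z.
X = sm.add_constant(np.column_stack([z, regime, regime * z]))
cond = sm.OLS(y, X).fit()
print("t-values of regime and regime*z in the conditional model:",
      round(cond.tvalues[2], 1), round(cond.tvalues[3], 1))
```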

14Remarkably, a similar conceptual formalisation of causal linear stochastic dependencies was developed in the field of psychometrics around the same time, e.g., see Steyer (1984; 1988). On the basis of the conditional expectation representation of regression models, “weak” causal linear stochastic dependencies were associated with the “average condition” and “strong” causal linear stochastic dependencies with the “invariance condition” (Steyer, 1984). Since both conditions could be tested empirically in experimental or non-experimental studies, they were regarded as crucial in applied modelling, especially for the purpose of causal hypothesis testing. It is easily discernible how much the invariance condition shared in common with the concept of super exogeneity, even though the ideas appear to have evolved independently in econometrics and psychometrics.

15Back in the field of econometrics, those formalised conditions for testing probabilistic causal premises helped raise modellers’ awareness of the importance of revising model specifications with respect to sample data features. Meanwhile, evidence from time-series modelling experiments under the VAR and the LSE approaches accrued and showed that parametric breaks in simple models were often rectifiable by augmenting those models with more dynamic terms and/or with institutional variables pertinent to the background information of the data samples. The active empirical search for models with parameter estimates as robust as possible in turn encouraged modellers to go for models not only with white-noise residuals but also with those residuals as small as possible. The increasingly conscientious drive to minimise statistically well-behaved residuals helped reorient applied modelling practice towards a more explicitly data-driven approach, because it was plainly impossible, in general, to secure such residuals by fitting regression models based directly upon a priori causal premises, no matter how elaborately those premises were formulated.

16The reorientation has naturally met with suspicion and resistance, especially from those drilled in the structural model tradition. The ad hoc nature of data-instigated model amendments is susceptible to the “measurement without theory” accusation, and could cost the associated empirical models their “structural” status, e.g., see Sims (1991). Parameter invariance verified within data samples remains a tentative property and, in spite of routine updating revisions, macro-econometric models still carry a relatively high risk of forecasting failure. Noticeably, major forecasting failures feed the sentiment that the main purpose of econometric modelling should be policy analysis rather than forecasting. The sentiment has in turn strengthened the Haavelmo-CC structural model tradition, especially with the growth of model-based policy analyses using survey or unit-record data over the last few decades. Since forecasting appears irrelevant in the context of those data samples, parametric invariance is of little consideration. Instead, the focus is on measuring the specific parameters associated with a few explanatory variables a priori designated to embody the policies concerned. Consequently, there is little desire to try to fit the modelled variables as closely as possible. In other words, there is relatively weak need for applied modellers to amend or augment a priori constructed models, except for concerns over multicollinearity, i.e., the problem of the explanatory variables of interest being correlated with each other or with other variables outside the realm of a priori policy interest.

2. Confluent Relation and Multicollinearity versus Omitted Variable Bias


17Historically, “multicollinearity” stemmed from “confluence”, e.g., see Hendry and Morgan (1989). In the early 1930s, Frisch chose “confluence” to characterise conceptually the interdependence between economic variables, whereas he termed it “multicollinearity” when he chose a multiple linear-equation system to represent the phenomenon mathematically. As quoted in the previous section, Frisch defined “confluent relations” as those which satisfied property (a) but not (b), i.e., in present-day econometric parlance, relations which would not withstand regime shifts in the conditional variables. In contrast to the dominantly a priori angle from which autonomy was discussed, Frisch approached the problem of confluent relations decisively from the a posteriori perspective. In particular, he was concerned with the fact that many economic variables were obviously trended in time-series samples,4 making them more likely to be significantly multicollinear. It therefore became important to devise statistical methods for selecting variables to form “a closed set” such that “superfluous variables” were de-selected while all the relevant variables were included (Bjerkholt and Qin, 2010, Section 3.2). Notice here that Frisch’s discussion of “a closed set” indicates that he felt strongly the need for statistical criteria and means to assist model design in order to achieve adequate model closure (see Section 3 for more discussion). Frisch actually classified the data information of the modelled variable into three parts: “systematic variations,” the part expected to be captured by a well-formulated structural model; “accidental variations,” i.e., model residuals due to the omission of many minor factors; and “disturbances,” i.e., shocks due to single or particular factors of rare occurrence which were hence absent from the model (Ibid., 165). Since it was often difficult to find and verify such well-formulated structural models, Frisch was seriously against the common use of R2 as a summary statistic after running regressions, because it was not an appropriate measure of the “systematic variations” (Ibid., 167), see also Bjerkholt (2013). The prevalent confluence, or significant “multicollinearity”, of trended economic variables could easily produce high R2 statistics.
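
The last point is easy to reproduce. Below is a minimal simulation sketch, under purely illustrative assumptions (two series linked only by a common deterministic trend), showing how trending alone can generate a high R2 in a levels regression while the relation vanishes in differences.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
t = np.arange(n)

# Two series that share only a deterministic trend, with no behavioural link between them.
y = 0.05 * t + rng.normal(size=n)
x = 0.04 * t + rng.normal(size=n)

level_fit = sm.OLS(y, sm.add_constant(x)).fit()
print("R^2 in levels:     ", round(level_fit.rsquared, 2))   # typically large

# The same regression on first differences removes the common trend.
diff_fit = sm.OLS(np.diff(y), sm.add_constant(np.diff(x))).fit()
print("R^2 in differences:", round(diff_fit.rsquared, 2))    # typically near zero
```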


18However, the journey towards appropriate statistical methods to assist model selection has proved extremely tortuous, e.g., see Qin (2013a, Chapter 9). Frisch’s own device of “bunch maps” (1934) was soon overshadowed by the wide adoption of his structural approach through the consolidation of the Haavelmo-CC programme. At least two aspects of the consolidation made Frisch’s “bunch maps” exploration redundant. The first was the adoption of an abstract SEM as the basic form of structural models. The form appeared to be an adequately general representation of an economy as a closed system.5 The second was the wide acceptance, in textbook econometrics, of a priori economic reasoning, i.e., substantive association, as the sole criterion for differentiating structural relations from confluent relations. Consequently, the dichotomy of structural versus confluent relations was replaced by that of a structural model versus its “reduced form,” as the connection between the two became conceptually narrowed down to the identification conditions, i.e., the conditions under which a priori structural parameters in SEMs were uniquely estimable from available data. Moreover, since the identification conditions effectively disentangled the aspect of simultaneous causality from multicollinearity, its connotation has dwindled to mere “collinearity,” i.e., correlations among explanatory variables within a single-equation model, although the phenomenon is still referred to as multicollinearity in textbooks, e.g., see Hendry and Morgan (1989).


19It should be noted that the issue of collinearity, or multicollinearity as it is better known in textbook econometrics, was also left largely outside the Haavelmo-CC programme. The issue was dealt with from two almost polarised stands in the post-Haavelmo-CC era, so much so that Farrar and Glauber (1967, 96) described the situation as “schizophrenic,” see also Qin (2013a, Chapter 7). For theoretically minded econometricians, multicollinearity was a non-experimental sample data problem which weakened statistical evidence. It thus reinforced their conviction in relying on a priori formulated theoretical models. Taking these models as “structural,” the general way of mitigating multicollinearity was to use estimators more elaborate than OLS (ordinary least squares), such as the ridge regression estimator. This stand has occupied the authoritative position through textbook econometrics, e.g., see Belsley et al. (1980), Judge et al. (1980) and Hill and Adkins (2001). But for applied modellers, especially those who paid close attention to the link between data features and model specification, multicollinearity was tackled as a specific model design problem, which could be circumvented through a careful choice of the combination of explanatory variables so as to keep them as uncorrelated with each other as possible.6 For those modellers, a more serious threat was OVB, e.g., see Griliches (1957), i.e., the problem of biased OLS estimates of the parameters of interest due to correlation of their corresponding explanatory variables with other variables excluded from the regression under consideration.7 An obvious way to prevent OVB is to include those originally omitted variables. But that implies an empirical modelling route which starts from models including as many explanatory variables as possible. The route clearly clashes with the textbook position of letting a priori theory dictate model specification, unless the specific theoretical relation of interest happens to be so general as to include all the possible explanatory variables. The widespread worries over OVB in applied research indicate that few a priori theoretical models are formulated generally enough. In other words, theoretical models frequently lack the property of conditional independence needed to justify model closure with respect to sample data information. Somehow, textbook econometrics keeps turning a blind eye to this fact.
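
The mechanics of OVB are simple to demonstrate. The following is a small simulation sketch under illustrative assumptions (one confounder w, true coefficient on x equal to 1): omitting the correlated variable shifts the OLS estimate on x by roughly the omitted coefficient times cov(x, w)/var(x).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000

# w is omitted from the short regression but correlated with x (a confounder).
w = rng.normal(size=n)
x = 0.7 * w + rng.normal(size=n)
y = 1.0 * x + 2.0 * w + rng.normal(size=n)     # true coefficient on x is 1.0

short = sm.OLS(y, sm.add_constant(x)).fit()                         # omits w
long = sm.OLS(y, sm.add_constant(np.column_stack([x, w]))).fit()    # includes w

print("short regression (biased):  ", round(short.params[1], 2))   # about 1.0 + 2.0*cov(x,w)/var(x) ~ 1.9
print("long regression (unbiased): ", round(long.params[1], 2))    # close to 1.0
```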

20During the reformative period of econometrics in the 1970s-1980s, multicollinearity and its mirror problem of OVB were actually examined more closely from the angle of model specification than from that of estimation. For example, multicollinearity was regarded as essentially a “problem of interpreting multidimensional evidence” from non-experimental data in a “parameter-by-parameter fashion” by Leamer (1973, 379) when he was developing the Bayesian model specification approach (see Qin, 2013a, Chapter 2). Noticeably, Leamer’s interpretation revives Frisch’s conception of the term as a technical representation of confluence, i.e., the interdependent nature of economic variables. As for the VAR and the LSE approaches, both advocated the strategy of starting empirical modelling from dynamically general models with white-noise residuals of the smallest possible standard errors. Such a strategy significantly reduces the OVB risk at the initial modelling stage. Furthermore, the LSE approach proposed to tackle the multicollinearity due to the inclusion of many mutually correlated regressors by reparameterisation, essentially by transforming the explanatory variables into two types, short-run and long-run variables, such that the model became an error-correction (EC) model, e.g., see Davidson et al. (1978). Consequently, multicollinearity became largely confined to the group of long-run variables constituting the EC mechanism, because of a salient economic feature of these variables: they were discernibly trended and tended to co-trend stochastically, the very feature that had worried Frisch decades before, as mentioned earlier. Interestingly, the confinement delimits Frisch’s “confluence” as essentially a long-run static feature of those co-trending variables. This co-trending feature was exploited by the cointegration theory of the 1980s, see Granger (1983) and Engle and Granger (1987), and the theory has since won a great deal of popularity among applied macro modellers.
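
To see how such a reparameterisation separates short-run from long-run variables, consider the textbook ADL(1,1) case (a generic illustration, not the specific specification of Davidson et al., 1978):

```latex
\begin{aligned}
  y_t &= a_0 + a_1 y_{t-1} + b_0 x_t + b_1 x_{t-1} + \varepsilon_t
  && \text{(ADL(1,1): regressors in levels, typically highly collinear)} \\[4pt]
  \Delta y_t &= a_0 + b_0\,\Delta x_t \;-\; (1 - a_1)\bigl[\,y_{t-1} - K x_{t-1}\,\bigr] + \varepsilon_t,
  \qquad K = \frac{b_0 + b_1}{1 - a_1}
  && \text{(equivalent error-correction form)}
\end{aligned}
```

The two forms are algebraically identical, but in the EC form the short-run difference term and the lagged disequilibrium term are usually far less correlated with each other than the original levels, so the remaining collinearity is confined to the levels inside the EC term, exactly the group of co-trending long-run variables discussed above.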

21However, those reformative ideas and approaches have not convinced the traditional camp. The VAR approach has commonly been regarded as “astructural” and the LSE approach as committing too many data-mining sins to produce models which could convincingly qualify as “structural.” Moreover, the long-run or EC component of models built by the LSE approach, the component seen as closest to a priori theory, tends to yield the weakest empirical evidence. Multicollinearity still haunts multivariate cointegrating relations, and the estimated EC effect is often small and susceptible to parameter constancy failures during turbulent economic periods. To circumvent the weakness, some macro modellers choose to use VAR models built purely on detrended variables, although such models have not yet produced forecasting records superior to those of models including the trend components. VAR-based macro-econometric models nowadays are thus used more often for policy-driven counterfactual simulations than for forecasting.

22As described previously, it is in micro-econometric research that models have predominantly been developed for policy analyses and used in counterfactual simulations, e.g., see Heckman (2005; 2008). The policy-oriented objective has focused modellers’ attention narrowly on estimating the effects of the few explanatory variables representing the policy concerned. Under the circumstances, it appears irrelevant to try to fit the modelled variables as fully as possible. As a result, OVB becomes one of the most undesirable nuisances, especially when the policy-related variables are implicitly trended. Another issue of grave concern is endogeneity, or rather the problem of “endogenous regressors” (e.g., see Angrist et al., 1996), since micro data sets are mostly cross-section surveys and the relevant theoretical relations are of the static type. Following the Haavelmo-CC SEM programme, microeconometric modellers widely accept the criterion of substantive association as the rule for labelling models “structural” as against “reduced-form,” and also the route of devising estimators more complicated than OLS as a general remedy for both OVB and simultaneity “bias,” e.g., see Angrist and Pischke (2009) and Wooldridge (2010). Relatively little attention is paid to statistical diagnoses of model adequacy, e.g., see Pagan and Vella (1989) and Cameron and Trivedi (2005, Chapter 2).

23The methodological mode in microeconometrics is best illustrated by the boom in applying instrumental variable (IV) estimators to single-equation theoretical models. Those IV estimators have been developed out of the strong conviction that they could serve as the magic “stone” to “kill” both OVB and simultaneity “bias,” e.g., see Angrist and Pischke (2009). However, IV-based empirical model results are often weak, inconclusive and sensitive to specification variations, so much so that heated debates have arisen concerning particularly the “structural” or “causal” verification of empirical model results, e.g., see Deaton (2010), Heckman (2010) and Imbens (2010). In fact, IV estimators live by actively exploiting multicollinearity via OVB, because they depend crucially on correlation between the a priori postulated explanatory variables of interest and other theoretically uninteresting, and frequently omitted, variables. Essentially, the IV approach helps to enhance the chance of finding those a priori postulated variables statistically significant by implicitly rejecting their conditional status and attributing that status to other collinear but non-causal variables, e.g., see Qin and Zhang (2013). From the perspective of Frisch’s discussion of confluence, the IV approach effectively resorts to “confluent relations” for the statistical verification of “structural relations.” The approach thus re-entangles model identification with collinearity in multivariate conditional models, the two facets of Frisch’s confluence which were largely disentangled by the Haavelmo-CC programme well over half a century ago.
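
For readers unfamiliar with the mechanics being criticised here, the following is a bare-bones simulation sketch of the textbook just-identified IV estimator (all numbers and variable names are illustrative assumptions): the instrument z is correlated with the endogenous regressor x but, by construction, not with the structural error.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Structural error u makes x endogenous; z is a valid instrument by construction.
u = rng.normal(size=n)
z = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = 1.0 * x + u                               # true structural coefficient on x is 1.0

beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)            # biased: picks up the x-u correlation
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]            # just-identified IV (Wald-type) estimator

print("OLS:", round(beta_ols, 2))   # noticeably above 1.0 (about 1.3 with these settings)
print("IV :", round(beta_iv, 2))    # close to 1.0 when z is relevant and excludable
```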

24The recently recurring debates over the causal interpretation of microeconometric models and programme evaluation models imply a deficiency in model design and selection criteria, particularly statistical criteria which would help differentiate a posteriori between “structural” relations and the “confluent” relations defined by Frisch. For example, the “counterfactual” pretext is widely used as a synonym for “autonomous” to justify whatever models are postulated a priori as being “structural” in micro modelling research. But the “invariance condition” is almost never checked and verified a posteriori. Without such verification, the “counterfactual” label can only serve as a thin excuse for modellers to treat their preferred a priori postulates as “maintained” rather than testable “null” hypotheses. Logically, model-based counterfactual analyses are predicated on highly partial a priori postulated relations being statistically differentiated from confluent relations. This brings us back to the need for more reliable a posteriori means of verifying which theoretical postulates do indeed have the property of an “autonomous” structure.

3. Inextricability of Autonomy and Confluence: Minimum Model Closure

25The previous two sections trace respectively the history of two of Frisch’s concepts, “autonomy” and “confluence.” The history shows us that the basic issues underlying the two have not really been fully resolved or even widely understood. Although the two are seemingly opposite concepts, the context in which each has evolved and been analysed shows that they are actually intimately related. The inextricability of the pair probably explains why it is difficult to fully resolve the basic issues underlying them and to make those issues widely understood. Hence, this section is devoted to that inextricability. Specifically, I venture the following observations.

26First of all, the gist of the two concepts relates closely to model specification and design, and it is methodologically wrong to regard them narrowly and separately as issues arising within the estimation of a priori fixed theoretical models. Historically, that may help explain why Frisch remained extremely cautious about adopting estimation methods directly from mathematical statistics, a much more reserved position than that taken by either Tinbergen or Haavelmo. It may also explain why neither “autonomy” nor “confluence” has survived in textbook econometrics, and why what has descended from them is all closely related to parameter estimation techniques, e.g., the Chow test for parameter constancy and variance inflation factors for detecting multicollinearity.

27Furthermore, the single rule of substantive association is far from adequate for determining whether a priori postulates are indeed empirically “structural,” a point already forcefully argued by Hendry (1995a) in the context of business cycle modelling research. In general, if we take into consideration the fact that virtually all a priori postulates are formulated under the ceteris paribus condition, which non-experimental sample data never meet, e.g., see Boumans and Morgan (2001), we should easily recognise that empirically viable structural relations need to be more comprehensive than the a priori postulated ones in order to adequately reflect confluence as a whole. This empirical requirement is particularly crucial for implementing statistical tests of a priori theoretical postulates. It is mainly because of this requirement that both the VAR and the LSE approaches have advocated the dynamically general→simple modelling strategy, in opposition to the textbook structural→reduced-form tradition. In fact, the requirement is also reflected in the ad hoc practice of many micro modellers who try out various control variables, in addition to the a priori given explanatory variables of interest, during their modelling experiments. Unfortunately, such practice has often been misconceived as committing the “data mining” sin, following Lovell (1983), although the statistical theory for conducting a series of hypothesis tests coherently and sequentially within a general→specific framework was provided by Anderson (1971) over four decades ago.
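
As a caricature of that general→specific logic only, the sketch below starts from a deliberately general regression and drops, one at a time, the least significant regressor; the t-based rule and the threshold are illustrative assumptions, not the encompassing-based algorithms of the LSE literature.

```python
import numpy as np
import statsmodels.api as sm

def general_to_specific(y, X, names, t_crit=2.0):
    """Crude backward simplification: start from the general model and drop,
    one at a time, the least significant regressor until every remaining
    |t|-ratio exceeds t_crit.  A caricature of sequential general->specific
    testing, not the full encompassing-based LSE algorithm."""
    keep = list(range(X.shape[1]))
    while True:
        fit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        t = np.abs(fit.tvalues[1:])            # skip the intercept
        worst = int(np.argmin(t))
        if t[worst] >= t_crit or len(keep) == 1:
            return [names[i] for i in keep], fit
        del keep[worst]

# Illustrative use: y depends on only two of five candidate regressors.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 5))
y = 1.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=300)
selected, final_fit = general_to_specific(y, X, names=["x1", "x2", "x3", "x4", "x5"])
print("retained regressors:", selected)        # typically ['x1', 'x4']
```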

28In retrospect, the necessity of embedding theoretical postulates in empirically more general models tells us that it is a misconception to polarise structural relations from confluent relations. Rather, the two should be considered as a unity of opposites. The unity is empirically best illustrated by the reparameterisation process of a multivariate dynamic model (a model of confluent relations), e.g., a VAR, into an EC model in the LSE approach. The EC model is regarded as closest to an empirically verified structural model because it is not only data permissive but also economically interpretable at the individual parameter level. Such a model can only be built systematically upon a set of model design and selection criteria based on both data and theory information, e.g., see Hendry (1995b).

29Two data-based criteria deserve particular attention as they play a pivotal role in combating the deficiency of the traditional theory-monopolist model design rule. The first is to choose models which are as compactly built as possible and which also have white-noise residuals as small as possible. This rule is referred to as “parsimonious encompassing” under the LSE approach. It is epistemologically important to realise that the various statistical criteria targeted at minimising well-behaved residuals are not merely for securing statistically optimal properties of the parameter estimates within a model, but also for reducing, as much as possible, the risk of OVB as well as of omitted-variable-induced parameter inconstancy. Moreover, it is always desirable to conduct empirical research in a progressive way, such that newly developed models enhance the explanatory capacity of existing models (phrased as “rival model encompassing” under the LSE approach). It should also be noted that the criterion of minimising residuals in model design has been shown to be necessary for validating the interpretation of any regression-based model as a conditional expectation, e.g., see Zimmerman (1976) and Steyer (1988). Actually, the validation was implied in Wold’s proximity theorem established more than half a century ago (see Wold and Juréen, 1953, 37-38). The theorem states roughly that the OLS estimator approaches optimality as the error terms of an SEM meet the criterion. In other words, the issue of choosing estimators becomes trivial once the regression model is shown to be a data-permissive conditional expectation. For applied research, this criterion implies a rejection of estimator-centred textbook econometrics as strategically missing the point.

30The second criterion is invariance, or statistical constancy of the parameter estimates of interest. This criterion is particularly indispensable for conducting policy-related counterfactual analyses using fitted econometric models. As pointed out above, this criterion underscores the co-existing side of confluent versus structural relations instead of their opposing side. Since valid structural relations are effectively a subset of confluent relations, the “average condition” is not strong enough to verify the former and “invariance” is thus needed as a stronger condition, as already mentioned in Section 1. The property of “super exogeneity” is thus essential for justifiably applying any estimated model to policy analyses. It is a mistake to regard “invariance” simply as an ad hoc “add-on” condition in this respect (e.g., see Cartwright, 2006). The condition lends vital empirical support to the development of “transcendental analysis” in economics, see Lawson (2003, Chapter 2). The co-existence of confluent and structural relations also implies that empirically observed failures of parametric invariance should not be immediately interpreted as “structural” breaks or shifts in the real world. Rather, they often indicate that the model specification is inadequate in representing significant confluent effects. Autonomy is frequently embedded in confluent relations because of the highly interdependent nature of many economic variables of interest. Modellers who stick to theory-based regression models and expect sophisticated estimators to work wonders, when the variables of interest are known to correlate significantly with variables disregarded by theory, are actually burying their heads in the sand.

31The pivotal role of the two data-based criteria lies in helping modellers target their empirical model design at finding models which are not only data-permissive but also self-autonomous. Logically, it is obviously premature to choose parameter estimators before the model to be estimated can be regarded as a self-autonomous unit. It simply makes no sense to go for estimators consistent with mis-specified or inadequately designed models. Moreover, the precision gain achievable through the choice of better estimators is usually of a far smaller order of magnitude than the gain achievable through improved model designs, as numerous empirical modelling experiments have repeatedly shown over the decades. From a more formal methodological viewpoint, the exigency of designing empirical models as data-permissive and self-autonomous units highlights the fundamental importance of assessing what the minimum limit for model closure should be, a vital prerequisite for conducting valid analytical statistics. Such a closure entails a number of assumptions summarised by Olsen and Morgan (2005, 256):

  1. That there are repeated, regular relations between independent and dependent variables of a constant conjunction form (rather than say, demi-regular form).

  2. That the existence of regularity is sufficient to indicate a relation.

  3. That the absence of regularity is sufficient to indicate no relation.

  4. That identification of this relation is either sufficient or necessary to provide grounds for adequate description and/or explanation and/or forecasting of events.

  5. That this relation is for all-intents-and-purposes enduring or intransitive reaffirming both 4 and 1.

  6. That in terms of the focus on 1-5 the system appears to be closed.

32In fact, the issue of model closure conditions addressed by Olsen and Morgan has long been identified as epistemologically fundamental for sustaining basic economics research by scholars of economic thought and methodology, e.g., see Lawson (1989; 2003) and also Mearman (2006). Among other things, their discussions highlight the intimate relationship between the regular or intrinsically stable features essentially sought after by economists and the closed model systems that they mentally use to capture those features. In view of the highly open and symbiotic economic reality, it is of primary importance for economic modellers to be fully aware of the conditions or assumptions needed, as well as the risks involved, when they engage seriously in closed-system research. In the event that the subject matter involves specific event judgements, as is often the case with policy-driven empirical studies, it is natural to expect that the available a priori theoretical models fall short of the minimum model closure conditions needed for the specific situation under investigation. Applied modellers should therefore take on the search for the smallest possible adequate models as their primary task.

33In retrospect, Frisch’s device of bunch maps was actually aimed at such a task. Unfortunately, his attempt has long been forgotten in textbook econometrics. Systematic studies on the importance of this task have remained few and far between, as well as far apart from mainstream econometrics. For example, Magnus and Morgan (1999, Part II) studied this task and referred to it as the use of “tacit knowledge” by experienced modellers; Swann (2006, ix) argued vehemently for its importance and referred to it as the “vernacular” property of good applied econometric work. Given the prevalence of economic confluence, a fruitful route for incorporating the vernacular element into empirical model design is to focus on introducing those local or institutional factors which exhibit, in a given sample, significant collinearity with the explanatory variables of a priori interest. More often than not, such an introduction can rectify observed breaks in the estimated parameters of interest in regression models excluding that element. Successful examples include the introduction of inflation effects into the UK aggregate consumption model, which largely covered the oil-shock impact (see Davidson et al., 1978), and the introduction of a “learning-adjusted own-interest rate” into the UK money demand model as a proxy for financial innovation (see Hendry, 1995b, Chapter 16).

34Clearly, the contingency of the strategy of exploiting the vernacular component goes against the widely held ethos, following the Haavelmo-CC tradition, of using econometric techniques as universal solvents. Apart from the difficulties of introducing it successfully into classroom teaching, the strategy has a “tentative” nature as it depends upon the sample-based invariance property. The risk of post-sample model breakdown remains unavoidably real no matter how constant the parameter estimates are shown to be within the sample period, owing naturally to the highly open and mutable economic reality. Such situations actually reflect the very essence of the unity of confluence and autonomy. They are described generally as “demi-regularities” by Lawson (2003, Chapter 4). Unfortunately, neither “confluence” nor “autonomy” is of essential concern in textbook econometrics, nor are the methodological arguments of the critical-realist economic thinkers. Since most academic studies involving empirical modelling are conducted for purposes no more than educational and political-economy oriented persuasion, it is unsurprising that the Haavelmo-CC tradition still prevails. Thanks to the imprecise and largely non-experimental nature of economic data, the autonomy of the practice of theory-monopolist model closure lives on at a safe distance from the open and confluent reality. The importance of Frisch’s notions of “confluence” and “autonomy” will remain unheeded, unless there is a paradigm shift in econometric modelling research and education—to forsake the naïve pretence that a priori substantive reasoning alone presents us with correctly built models “about which nothing was unknown except parameter values” (Hansen, 2004, 276).

I wish to thank Olav Bjerkholt, David Hendry, Mary Morgan, editors of Œconomia and two anonymous referees for their valuable comments, criticisms and suggestions. Errors that may yet remain are obviously my own.


Bibliography

Aldrich, John. 1989. Autonomy. Oxford Economic Papers, 41(1): 15-34.

Anderson, Theodore W. 1971. The Statistical Analysis of Time Series. New York, NY: John Wiley & Sons.

Angrist, Joshua D., Guido Imbens, and Donald Rubin. 1996. Identification of Causal Effects Using Instrumental Variables. Journal of the American Statistical Association, 91(434): 444-455.

Angrist Joshua D. and Jörn-Steffen Pischke. 2009. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press.

Belsley, David A., Edwin Kuh, and Roy E. Welsch. 1980. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York, NY: John Wiley & Sons.

Bjerkholt, Olav. 2013. Promoting Econometrics Through Econometrica 1933-39. Presented at ESEM-67, University of Gothenburg, August 2013. SSRN: http://dx.doi.org/10.2139/ssrn.2401990.

Bjerkholt, Olav and Duo Qin (eds.). 2010. A Dynamic Approach to Economic Theory: The Yale Lectures of Ragnar Frisch in 1930. London: Routledge.

Boumans, Marcel and Mary Morgan. 2001. Ceteris Paribus Conditions: Materiality and the Application of Economic Theories. Journal of Economic Methodology, 8(1): 11-26.

Cameron, A. Colin and Pravin K. Trivedi. 2005. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press.

Cartwright, Nancy. 2006. Where is the Theory in our “Theories” of Causality? The Journal of Philosophy, 103(2): 55-66.

Davidson, James E.H., David F. Hendry, Frank Srba, and Stephen Yeo. 1978. Econometric Modelling of the Aggregate Time-Series Relationship Between Consumers’ Expenditure and Income in the United Kingdom. Economic Journal, 88(December): 661-692.

Deaton, Angus. 2010. Instruments, Randomization, and Learning about Development. Journal of Economic Literature, 48(2): 424-455.

Doan, Thomas, Robert Litterman, and Christopher A. Sims. 1984. Forecasting and Conditional Projection Using Realistic Prior Distributions. Econometric Reviews, 3(1): 1-100.

Engle, Robert F. and Clive W.J. Granger. 1987. Co-Integration and Error Correction: Representation, Estimation and Testing. Econometrica, 55(2): 251-276.

Engle, Robert F., David F. Hendry, and Jean-Francois Richard. 1983. Exogeneity. Econometrica, 51(2): 277-304.

Farrar, Donald E. and Robert R. Glauber. 1967. Multicollinearity in Regression Analysis: The Problem Revisited. The Review of Economics and Statistics, 49(1): 92-107.

Favero, Carlo, and David F. Hendry. 1992. Testing the Lucas Critique: A Review. Econometric Reviews, 11(3): 265-306.

Frisch, Ragnar. 1934. Statistical Confluence Analysis By Means Of Complete Regression Systems. Oslo: Universitets Økonomiske Institutt.

Frisch, Ragnar. 1938. Autonomy of Economic Relations: Statistical versus Theoretical Relations in Economic Macrodynamics. Reproduced by University of Oslo in 1948 with Tinbergen’s comments, Memorandum. Oslo. Published in Hendry and Morgan (eds.), 1995, The Foundations of Econometric Analysis, Cambridge: Cambridge University Press.

Frisch, Ragnar and Frederick V. Waugh. 1933. Partial Time Regression as Compared with Individual Trends. Econometrica, 1(4): 378-401.

Granger, Clive W.J. 1983. Cointegrated Variables and Error Correction Models. UCSD Discussion Paper 83-13a.

Griliches, Zvi. 1957. Specification Bias in Estimates of Production Functions. Journal of Farm Economics, 39(1): 8-20.

Haavelmo, Trygve. 1944. The Probability Approach in Econometrics. Econometrica, 12(supplement): iii-vi; 1-115.

Haavelmo, Trygve. 1950. Remarks on Frisch’s Confluence Analysis and its Use in Econometrics. In T.K. Koopmans (ed.), Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph 10. New York, NY: Wiley, 258-265.

Hansen, Lars Peter. 2004. An Interview with Christopher A. Sims. Macroeconomic Dynamics, 8(2): 273-294.

Heckman, James J. 2005. The Scientific Model of Causality. Sociological Methodology, 35(1): 1-97.

Heckman, James J. 2008. Econometric Causality. International Statistical Review, 76(1): 1-27.

Heckman, James J. 2010. Building Bridges between Structural and Program Evaluation Approaches to Evaluating Policy. Journal of Economic Literature, 48(2): 356-398.

Hendry, David F. 1995a. Econometrics and Business Cycle Empirics. The Economic Journal, 105(433): 1622-1636.

Hendry, David F. 1995b. Dynamic Econometrics. Oxford: Oxford University Press.

Hendry, David F. and Mary S. Morgan. 1989. A Re‑Analysis of Confluence Analysis. Oxford Economic Papers, 41(1): 35‑52.

Hendry, David F. and Mary S. Morgan (eds.). 1995. The Foundations of Econometric Analysis. Cambridge: Cambridge University Press.

Hendry, David F. and Jean-Francois Richard. 1982. On the Formulation of Empirical Models in Dynamic Econometrics. Journal of Econometrics, 20(1): 3-33.

Hendry, David F. and Jean-Francois Richard. 1983. The Econometric Analysis of Economic Time Series. International Statistical Review, 51(2): 111-148.

Hill, R. Carter and Lee C. Adkins. 2001. Collinearity. In Badi H. Baltagi (ed.), A Companion to Theoretical Econometrics. Oxford: Blackwell, 256-278.

Hurwicz, Leonid. 1962. On the Structural Form of Interdependent Systems. In E. Nagel, P. Suppes and A. Tarski (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International Congress. Stanford, CA: Stanford University Press, 232–239.

Imbens, Guido. 2010. Better LATE than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009). Journal of Econometrics, 156(2): 399-423.

Judge, George G., William E. Griffiths, R. Carter Hill, and Tsoung-Chao Lee. 1980. The Theory and Practice of Econometrics. New York, NY: John Wiley & Sons.

Klein, Lawrence R. and Arthur S. Goldberger. 1955. An Econometric Model of the United States 1929-1952. Amsterdam: North-Holland.

Koopmans, Tjalling C. 1949. Reply to Rutledge Vining. Review of Economics and Statistics, 31(2): 86-91.

Koopmans, Tjalling C. (ed.). 1950. Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph 10. New York, NY: Wiley.

Lawson, Tony. 1989. Realism and Instrumentalism in the Development of Econometrics. Oxford Economic Papers, 41(1): 236-258.

Lawson, Tony. 2003. Reorienting Economics. London: Routledge.

Leamer, Edward E. 1973. Multicollinearity: A Bayesian Interpretation. Review of Economics and Statistics, 55(3): 371-380.

Lipsey, Richard G. 1960. The Relationship between Unemployment and the Rate of Change of Money Wage Rates in the U.K., 1862-1957. Economica, 27(1): 1-31.

Lovell, Michael C. 1983. Data Mining. The Review of Economics and Statistics, 65(1): 1-12.

Lucas, Robert E. 1976. Econometric Policy Evaluation: A Critique. In K. Brunner and A. Meltzer (eds.), Stabilization of the Domestic and International Economy. Amsterdam: North-Holland, 7-29.

Magnus, Jan R. and Mary S. Morgan. 1999. Methodology and Tacit Knowledge: Two Experiments in Econometrics. New York, NY: John Wiley & Sons.

Mäki, Uskali (ed.). 2002. Fact and Fiction in Economics: Models, Realism and Social Construction. Cambridge: Cambridge University Press.

Marschak, Jakob. 1950. Statistical Inference in Economics: An Introduction. In T.C. Koopmans (ed.), Statistical Inference in Dynamic Economic Models. New York: John Wiley & Sons, 1-50.

Mearman, Andrew. 2006. Critical Realism in Economics and Open-Systems Ontology: A Critique. Review of Social Economy, 64(1): 47-75.

Olsen, Wendy and Jamie Morgan. 2005. A Critical Epistemology of Analytical Statistics: Addressing the Sceptical Realist. Journal for the Theory of Social Behaviour, 35(3): 255-284.

Pagan, Adrian and Frank Vella. 1989. Diagnostic Tests for Models Based on Individual Data: A Survey. Journal of Applied Econometrics, 4(S): S29-S59.

Qin, Duo. 2013a. A History of Econometrics: The Reformation from the 1970s. Oxford: Oxford University Press.

Qin, Duo (ed.). 2013b. The Rise of Econometrics. London: Routledge.

Qin, Duo and Yanqun Zhang. 2013. A History of Polyvalent Structural Parameters: The Case of Instrumental Variable Estimators. SOAS Economic Working Papers 183.

Sims, Christopher A. 1977. Exogeneity and Causal Orderings in Macroeconomic Models. In New Methods in Business Cycle Research: Proceedings from a Conference, Federal Reserve Bank of Minneapolis, 23-43.

Sims, Christopher A. 1982. Policy Analysis with Econometric Models. Brookings Papers on Economic Activity, 1: 107-152.

Sims, Christopher A. 1986. Are Forecasting Models Usable for Policy Analysis. Quarterly Review of the Federal Reserve Bank of Minneapolis, Winter: 2-16.

Sims, Christopher A. 1991. Comments: ‘Empirical Analysis of Macroeconomic Time Series: VAR and Structural Models’ by Michael P. Clements and Grayham E. Mizon. European Economic Review, 35(4): 922-932.

Steyer, Rolf. 1984. Causal Linear Stochastic Dependencies: The Formal Theory. In E. Degreef and J. van Buggenhaut (eds.), Trends in Mathematical Psychology. Amsterdam: North-Holland, 317-346.

Steyer, Rolf. 1988. Conditional Expectations: An Introduction to the Concept and its Applications in Empirical Sciences. Methodika, 2(1): 53-78.

Swann, G.M. Peter. 2006. Putting Econometrics in Its Place: A New Direction in Applied Economics. Cheltenham: Edward Elgar.

Wold, Herman O.A. and Lars Juréen. 1953. Demand Analysis: A Study in Econometrics. New York, NY: John Wiley & Sons.

Wooldridge, Jeffrey M. 2010. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press, second edition.

Zimmerman, Donald W. 1976. Test Theory with Minimal Assumptions. Educational and Psychological Measurement, 36(1): 85-96.


Notes

1 Notice that Frisch’s early conception of autonomy remained virtually unknown until the recent decade.

2 Doan et al. (1984) carried out recursive estimation of their VAR models and found that the parameter estimates stayed relatively constant during a period when there were known policy shocks.

3 For a detailed description of the VAR and the LSE approaches, see Qin (2013a, Chapters 3 and 4).

4 This may be best illustrated by his joint work with Waugh on methods for handling linear trends (1933).

5 Haavelmo (1950) explained Frisch’s confluence analysis mainly with respect to multicollinearity. In his view, Frisch’s method was only tenable for single-equation models and lacked a probabilistic foundation.

6 A good example can be found from Lipsey’s (1960) model specification experiments to measure the unemployment effect, see also Qin (2013a, Chapter 5).

7 In statistics, those omitted variables are commonly referred to as “confounding” variables.


How to cite this article

Print reference

Duo Qin, “Inextricability of Autonomy and Confluence in Econometrics”, Œconomia, 4-3 | 2014, 321-341.

Electronic reference

Duo Qin, “Inextricability of Autonomy and Confluence in Econometrics”, Œconomia [Online], 4-3 | 2014, online since 01 September 2014. URL: http://journals.openedition.org/oeconomia/883; DOI: https://doi.org/10.4000/oeconomia.883


Author

Duo Qin

Department of Economics, SOAS, University of London. E-mail: dq1@soas.ac.uk


Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported files) are “All rights reserved” unless otherwise stated.
