Uncertainty and the associated risk assessment are developing rapidly for large-scale industrial and environmental systems, along with the dissemination of advanced quantitative modeling in support of decision-making, the increased public appetite for questioning expertise, and the enforcement of tighter safety and environmental control standards. In fact, uncertainty may be viewed as consubstantial with any human activity, well beyond the issues of applied science and decision-making. Yet this paper concentrates on the narrower field of quantitative and quantifiable uncertainty, with a modeling view, as a subject of considerably growing interest in industrial and environmental fields. This may sound somewhat restrictive to those having in mind real-world risk situations where, on the one hand, human and organisational factors play an essential role (as evidenced by the classical examples of the Three-Mile-Island accident in the nuclear field or of the Challenger space shuttle in the aerospace industry) and, on the other hand, uncertainty is to a large extent poorly quantifiable or not quantifiable at all (think about quantifying the lack of knowledge about the plausibility of September 11th or another similar event). Those aspects evidence obvious limitations to the applicability of uncertainty quantification, although some limited modeling contributions may still apply here and there, such as probabilistic human reliability models. Quantitative risk and uncertainty analysis nevertheless remains an essential tool, existing or future, for the regulation of industrial activity and environmental control, as well as for decision-making in corporate investment, public infrastructure and health.
Indeed, a growing number of industrial studies involving quantitative models include treatments of the numerous sources of uncertainty affecting their results. Uncertainty treatment in physical, environmental or risk modeling is the subject of a long-standing theoretical literature rooted in fundamental statistical and economic thinking (Knight 1921; Savage 1954). It then developed in the risk and environmental assessment fields (Beck 1987; Granger Morgan & Henrion 1990; Helton 1993; Hamby 1994; Paté-Cornell 1996) as well as in metrology (ISO GUM, 1995) and signal processing (Shannon, 1948; Cover & Thomas, 1990). Such developments included a central debate on the relevance of classifying the large variety of uncertainties encountered into two categories: the epistemic (or reducible, lack-of-knowledge, …) type, referring to uncertainty that decreases with the injection of more data, physical knowledge or model runs, and the aleatory (or irreducible, intrinsic, variability, …) type, for which there is a variation of the true characteristics of the systems that may not be reduced by additional data or knowledge. More recently, uncertainty treatment gained large-scale industrial importance as a number of major applications gradually included some uncertainty treatment, including in particular: (i) nuclear safety studies involving large scientific computing (thermo-hydraulics, mechanics, neutronics etc.) or Probabilistic Safety Analyses (PSA); (ii) advanced design in the aerospace, automotive or, more generally, mechanical industries; (iii) oil exploration and underground waste disposal control; (iv) environmental impact studies.
On the one hand, appropriate uncertainty management allows for improved system designs, more accountable forecasts, better natural and industrial risk control, and more robust performance, thereby generating an “internal/business” driver. On the other hand, many institutional bodies or regulators now demand technical justifications, including an understanding of the influence of uncertainties in industrial safety or environmental control, thereby generating an “external/regulatory” motivation. Both may take place at many different steps of the industrial cycle, from upstream research to in-service operation and control (see Figure 1).
Figure 1: A European panel of recent industrial uncertainty studies (de Rocquigny et al 2008).
Most recently, the ESReDA European network of industries and academics undertook a large review and methodological research effort resulting in a consistent generic framework applicable to most industrial studies (de Rocquigny et al 2008), in which the author of this paper took a leading role. For most of the uncertainty treatments undertaken in the industrial world, at least partial probabilistic modeling of the uncertainties is considered. Yet deterministic uncertainty treatment is generally still involved, and more isolated non-probabilistic ventures (such as fuzzy sets, evidence theory or possibilistic approaches) may be encountered, resulting in a rather heterogeneous set of decision criteria and differing interpretations of the probabilistic or non-probabilistic figures. One of the challenges that motivated this research was to formulate properly, in consistent mathematical terms, the variety of decision criteria encountered in practice, as a prerequisite to meaningful uncertainty modeling.
A number of generic challenges may be encountered in developing industrial uncertainty treatment:

- cultural and organizational ones: the consideration of uncertainty disrupts to some extent the traditional engineering habits and regulatory settings and often faces claims of costly, confusing or insufficiently guided sophistication, as risk analysis generally proves difficult to communicate to clients or, furthermore, to the general public;

- policy ones: decision-making under uncertainty requires the tricky choice of quantities of interest or risk measures (e.g. expected utilities) that should properly represent risk aversion or behavior in the face of uncertainty;

- scientific ones: the coupling of probabilistic and phenomenological modeling generates new concepts and open mathematical and epistemological issues;

- technical and computational ones: regarding for instance the need to tackle large-scale physical-numerical models, as most design or operation control studies rely on sophisticated engineering processes involving complex numerical codes, all the more so since high performance computing unleashes ever-increasing mesh sizes or micro-macro equation formulations. These models require large CPU budgets to run and are fed by quite heterogeneous sources of information (noisy data on various physical variables, incomplete expertise), far away from the closed-form examples originally studied in the literature. They lead to an essential scientific computing challenge, as any uncertainty treatment increases by orders of magnitude the number of runs required by best-estimate studies. Dedicated tools are required in support: the present research was closely related to the development of the Open Source software platform called Open TURNS [2].

[2] Open Treatment of Uncertainty, Risk aNd Statistics: www.openturns.org (Andrianov et al, 2007).
This paper provides a theoretical review of the subject inspired by a large panel of cases in various fields of engineering and environmental studies, evidencing the emergence of a practical consensus upon such an old epistemological debate, and setting forward a number of top-priority open challenges. The remainder is structured as follows. Section §2 introduces the versatility of the concepts, recalling the main threads of scientific debate on the subject, to which the generic uncertainty treatment approach explained in §3 contributes a practitioner's answer. In §4, three examples illustrate the versatility of the framework in environmental monitoring, industrial safety and natural risk assessment. §5 opens up the discussion on a number of key practical challenges mentioned earlier, regarding respectively the definition of risk criteria, and data modeling, calibration and inverse methods. A conclusion and references end the paper.
As mentioned in the beginning, uncertainty is the subject of long-standing epistemological interest as it stands in fundamental connection both with any type of modeling activity and with the scientific consideration of risk. The former relates to the fact that any modeling endeavor brings along a more or less explicit consideration of the uncertain deviation of the model from empirical evidence. Regarding the latter, risk and uncertainty analysis prove so connected in applications that the precise delimitation between the practical meaning of the two terms does not appear to be central modeling-wise, as it essentially depends on the system definition and the terminological habits of the field considered: to put it as simply as possible, some would interpret risk analysis as the consideration of the possible consequences and the associated uncertainties about what the consequences will be, while uncertainty analysis could be limited to describing the initiating events.
Indeed, in the perspective of the present paper, the practical motivation for undertaking an uncertainty (or risk) study of a given system arises from the following common features: (i) the state of the system considered, conditional on taking some given actions, is imperfectly known at a given time; (ii) some of the characteristics of the state of the system, incorporated in a given type of “performance” or “consequence”, are at stake for the decision-maker. Because of (i), the best that may be sought are possible or likely ranges for the variables of interest quantifying those characteristics; more specifically, inference will be made through a probabilistically-defined quantity of interest or risk measure, such as an event probability, the coefficient of variation of the best-estimate, confidence intervals around the prediction, value-at-risk etc. The rationale of risk or uncertainty modeling is to estimate those quantities, aggregating any information available on any type of variable linked to the system, plus the information brought by statements defining the system phenomenology and structure (physical laws, accident sequence analysis etc.). Combining statistical and phenomenological models aims at producing the highest-informed inference, hopefully less uncertain than a straightforward prediction obtained through purely empirical data (frequency records of observations) or expert opinion.
A large variety of causes or considerations give rise to practical uncertainty in the state of a system as defined in (i) above, such as: uncertain inputs and operating conditions in industrial processes; model inaccuracy (simplified equations, numerical errors, etc.); metrological errors; operational unknowns; unpredictable (or random) meteorological influence; natural variability of the environment; conflicting expert views etc. A rich literature has described the variety of natures of uncertainty and discussed the key issue of whether they should or could receive the same type of quantification effort, particularly a probabilistic representation. Such a debate may be traced back to the early theory of probability rooted in the 17th century, while modern thinking may be originally inherited from economics and decision theory, in close link with the renewed interpretation of statistics. Knight (1921) introduced the famous distinction between “risk” (i.e. unpredictability with known probabilities) and “uncertainty” (i.e. unpredictability with unknown, imprecise probabilities, or even not subject to probabilisation), although it is less often remembered how these early works already admitted the subtleties and limitations incorporated in such a simplified interpretation regarding real physical systems. Indeed, the economic and decision theory literature has generally restricted itself to simple decision settings, such as closed-form quantitative lotteries, without tackling in detail the physical bases of industrial systems. A closer look shows that various forms of uncertainty, imprecision, variability, randomness and model error are mixed inside phenomenological data and modeling. Think for instance about riverbed roughness, for which the topography at any given time is generally incompletely known, varies significantly in time during or after flood events, and is incompletely modeled within simplified hydrodynamics and the associated numerical riverbed mesh. Knowledge of the systems is seldom as binary as incorporated in such statements as “probabilities are known” vs. “unknown”, so that modern risk analysis would not generally stick to such historical distinctions.
Indeed, tackling real-scale physical systems has been further discussed on that issue within the risk assessment community, gaining momentum in the 1980s and 1990s in close link with the development of US nuclear safety reviews and environmental impact assessments (Granger Morgan & Henrion 1990; Apostolakis 1990; Helton 1993). As already mentioned, debate has notably concerned the classification of the large variety of uncertainties encountered in large industrial systems into two salient categories, and the rationale for separating or not separating those two categories: namely the epistemic type, reducible with respect to the injection of more data, physical knowledge or model runs, and the aleatory type, which proves irreducible. To a certain extent, this epistemic/aleatory distinction may be viewed as a refinement of the early economic distinction between uncertainty and risk. Note in fact that the practical reducibility of some of the sources of uncertainty does not in practice equate to their “epistemic” (i.e. theoretically reducible) nature: reducibility also involves industrial or practical constraints, and even a cost-benefit point of view, and there can be a continuum between “strictly irreducible” and “reducible”: in some cases, “epistemic” uncertainty cannot be reduced, e.g. when measurements are very expensive, or even not feasible without damaging the equipment.
Finer distinctions include (Oberkampf et al 2002; Frey & Rhodes 2005; de Rocquigny 2006):

- “variability” vs. “uncertainty”: this distinction, close to but not completely equivalent to the preceding one, is used more specifically when the system inputs mix a population of objects (or scenarios), a spatial distribution of properties within a system, or even a temporal distribution of properties affecting a system;

- “epistemic uncertainty” vs. “error”: a finer distinction depending on whether the ignorance or subjective uncertainty is “inevitable” or “deliberate” in spite of the availability of knowledge;

- uncertainty that is “parametric” (associated with model inputs according to the level of information available on those inputs) vs. “modeling uncertainty” affecting the adequacy of the model itself to reality (structure, equations, discretisation, numerical resolution etc.).
Accordingly, some authors link the choice of particular mathematical settings to such different natures of uncertainty involved in the system studied. On the one hand, the key aleatory/epistemic distinction has motivated the use of double probabilistic settings (Helton 1993; Apostolakis, 1999). This means a probabilistic description of the uncertain (aleatory) states of the system, for instance failure probabilities for uncertain structural failures or a Gaussian standard deviation for the variability of an environmental feature, upon which a second level of (epistemic) probabilistic description represents the lack of knowledge or expert discrepancies in the proper parameters of such aleatory phenomena. Conversely, such a distinction is considered impractical by others. Measurement uncertainty is an example of a practical source within which it is hard to discern epistemic and aleatory components: the (ISO 1995) international standard has in fact cancelled the mention of an epistemic/aleatory structure as a mandatory distinction in analyses and treatments, which therefore receive a common probabilistic sampling approach. Similarly, major developments have approached the impact of uncertainty on complex numerical models with the help of functional and numerical analysis and statistical computer science (design of experiments) under the name of sensitivity analysis: Cacuci (1980) initiated the exploration of such impacts, and major developments have occurred since, with an extension to global analysis (Sobol, 1993; Saltelli et al, 2004), within which single probabilistic settings are systematically used whatever the nature of uncertainty involved. Finally, the probabilistic treatment of uncertainty has been criticized by a number of authors, particularly for those sources of uncertainty affecting physical systems that are not amenable to characterization as aleatory (i.e. randomness) or variability, hence renewing the early distinctions made by the economic literature (Granger Morgan & Henrion, 1990).
Yet, when moving into the support of practical decision-making, industrial experience shows that the variety of settings used to represent uncertainty preserves a core of essential features, amenable to a unified framework. This section summarises the generic methodology that has emerged as the product of a number of recent papers, and particularly of the work of the ESReDA European industrial group (de Rocquigny, 2008).
Quantitative uncertainty assessment in industrial practice typically involves:

- a pre-existing physical or industrial system or component lying at the heart of the study, which is represented by a pre-existing model;

- inputs, affected by a variety of sources of uncertainty;

- a variety of actions, i.e. design, operation or maintenance options or any type of controlled variables or events that make it possible to modify or control, to some extent, the system performance or impact;

- an amount of data and expertise available to calibrate the model and/or assess its uncertainty;

- industrial stakes and decision-making that motivate the uncertainty assessment more or less explicitly. They include safety and security, environmental control, financial and economic optimization etc. They are generally the purpose of the pre-existing model, the outputs and inputs of which help handle those stakes quantitatively within the decision-making process.
Figure 2 – The pre-existing or system model and its inputs / outputs
The pre-existing system may refer to a very wide variety of situations, such as: a metrological chain, a dam or hydraulic facility, a maintenance process, an industrial or domestic site threatened by flood risk, etc. For quantitative uncertainty studies, that system will generally be modeled by a single numerical model or a chain of models referred to as the system model or pre-existing model: anything from a straightforward analytical formula to coupled 3D finite element hydrodynamic or comprehensive hydrological models etc., standing as a numerical function linking output variables z = G(x, d) to a number of continuous or discrete input variables or parameters, where:
- some input variables (noted x = (xi)i=1…p, underlining denoting vectors throughout the paper) impacting the state of the system are uncertain, subject to randomness, variability, lack of knowledge, errors or any other source of uncertainty;

- other inputs (noted d) are known or fixed, either because they are well known (e.g. actions or design choices, controlled experimental factors for calibration …) or because they are considered to be of secondary importance with respect to the output variables of interest.
Note that the computation of the pre-existing model z = G(x, d) for a point value (x, d) (i.e. not uncertain at all) may require a highly variable CPU time: depending on the case, from 10^-4 s to several days for a single run. Within the potentially large number of raw outputs of the models, it is useful to distinguish the model output variables of interest that are eventually important for the decision criteria; these are formally included within the vector z = (zv)v=1…r. Most of the time, z is a scalar (z) or a small-size vector (fewer than 5 components). Although the state of the system may be of high dimension, the decision-making process essentially involves one or a few variables of interest such as: a physical margin to failure, a net cost, an accumulated environmental dose, or a failure rate in risk analysis.
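To make the notation concrete, here is a minimal Python sketch of such a system model, assuming a purely hypothetical flood-margin example (the function name, inputs and numerical constants are illustrative, not taken from the paper): G(x, d) returns the margin between a dike crest level d and the computed flood level.

```python
import numpy as np

def system_model(x, d):
    """Hypothetical system model z = G(x, d) for a flood-margin study.

    x : uncertain inputs (q, ks, zm) = (river discharge [m3/s],
        Strickler roughness coefficient, riverbed level [m])
    d : fixed decision variable, here the dike crest level [m]
    Returns the variable of interest z, the safety margin [m]
    (positive means no overflow).
    """
    q, ks, zm = x
    # crude Manning-Strickler water depth over a rectangular reach
    # (width 300 m, slope 5e-4) -- purely illustrative physics
    width, slope = 300.0, 5e-4
    depth = (q / (ks * width * np.sqrt(slope))) ** 0.6
    return d - (zm + depth)

# a single "best-estimate" run for a point value (x, d)
z_best = system_model(x=(1000.0, 30.0, 50.0), d=55.0)
print(f"best-estimate margin: {z_best:.2f} m")
```

A single best-estimate run such as this one is precisely what an uncertainty treatment multiplies by orders of magnitude.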
It is worth insisting on the fact that the undertaking of an uncertainty study is linked to two underlying facts: (i) the state of the system considered, conditional on taking some given actions (d), is imperfectly known at a given time; (ii) some of the characteristics of the state of the system, incorporated in a given type of “performance” or “consequence” represented by the variables of interest (z), are at stake for the decision-maker. Think of:

- the occurrence of flooding over a given dike height within the next year, essential for safety, which cannot be predicted exactly because of meteorological unpredictability or lack of knowledge of flooding phenomena;

- the true length of a device at the time it is measured within a given experimental setting, an essential feature to secure the fit with the assembly specifications, which is imperfectly known due to internal noise or uncontrolled and influential environmental factors affecting the sensor;

- etc.
Although a pre-existing model may be defined as a theoretically-causal or deterministic relationship between precise quantities characterising the state of the system, the consideration of uncertainty about the system means of course that we do not have the ambition of predicting exactly the realization of z based on precise-enough information on the values taken by x (i.e. representing the true state of the system). By the above definition, there will always be some lack of knowledge of the state of the system. Modeling under uncertainty means that the information available – be it observations or expertise on those inputs – can merely characterise the possible or likely values for x. Hence, the model should help to infer possible or likely ranges for z deduced from that characterisation of the inputs.
Whatever the paradigm chosen, there is a need for defining a distribution or measure of uncertainty describing those possible or likely values, which will be referred to as the (input) uncertainty model. The uncertainty model will encode the lack of knowledge remaining after incorporation of all information available to describe the inputs, either observed data or expertise. For instance:

- deterministic (interval computation): uncertain inputs vary within physically-maximal or merely plausible ranges;

- probabilistic: a joint continuous or discrete distribution describing the input uncertainty within the random vector X;

- Dempster-Shafer: a pair of belief and plausibility distribution functions describing the input uncertainty (Dempster 1967; Helton & Oberkampf 2004).
While the literature has discussed to a large extent the pros and cons of the various probabilistic and non-probabilistic paradigms (for instance Helton & Oberkampf 2004), it is assumed later on that probabilistic uncertainty treatment is acceptable for the cases considered: meaning that, whatever the final goals, decision criteria will generally be built upon the probability distribution of Z representing uncertainty on the output variables of interest, or some of its characteristics, such as: standard deviation, variance or coefficient of variation (i.e. easily interpreted as percentages of relative error/uncertainty) on one or more outputs, confidence intervals on the variable(s) of interest, probabilities of exceedance of a safety threshold etc.
Formally, consider thus a probabilistic setting, involving a sample space representing the possible values of the input variables x, in which X = (Xi)i=1…p ~ FX(x | θX), where FX denotes a joint cumulative probability distribution function (cdf) associated with the vector of uncertain inputs X (capital letters denoting random variables), defining the uncertainty model. The parameters of FX are grouped within the vector θX: they include, for example, statistical parameters of position, scale and shape, or moments of the marginal laws, coefficients of correlation (or parameters of the copula function), or even the extended parameters of a non-parametric kernel model. Consequently, the output variables Z also inherit a random model, the distribution of which is generally unknown and costly to compute if G is a complex system model: quantifying the uncertain behaviour of such outputs is in fact the purpose of the uncertainty propagation methods which will receive more attention hereafter.
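As a hedged illustration of such an uncertainty model FX(x | θX) and of the induced distribution of Z, the sketch below (plain NumPy/SciPy, reusing the hypothetical system_model function from the previous sketch; all distribution choices and parameter values θX are assumptions made for illustration) samples the input random vector X and propagates it through G.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# level-1 uncertainty model F_X(x | theta_X): illustrative marginal laws,
# taken independent here for simplicity (dependence could be added via
# rank correlations or a copula)
marginals = {
    "q":  stats.gumbel_r(loc=1000.0, scale=500.0),    # yearly maximal discharge
    "ks": stats.uniform(loc=15.0, scale=30.0),         # roughness, bounded in [15, 45]
    "zm": stats.triang(c=0.5, loc=49.0, scale=2.0),    # riverbed level in [49, 51]
}

def sample_inputs(n):
    """Draw n realizations of the random input vector X ~ F_X(. | theta_X)."""
    return np.column_stack([law.rvs(size=n, random_state=rng)
                            for law in marginals.values()])

# propagate through the system model G (the flood-margin sketch above)
X = sample_inputs(10_000)
X[:, 0] = np.clip(X[:, 0], 0.0, None)   # guard against unphysical negative Gumbel draws
Z = np.array([system_model(x, d=55.0) for x in X])
print(f"E[Z] = {Z.mean():.2f} m,  std[Z] = {Z.std(ddof=1):.2f} m")
```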
Note also that multiple epistemological interpretations of such a probabilistic setting are possible, while still resulting in the same mathematical objects and computation needs. Indeed, frequent use is made of such a standard probabilistic setting in regulated practice such as metrology (ISO, 2005), pollutant discharge control or nuclear licensing, without positively choosing a single theoretical interpretation. This discussion is closely linked to that of the various natures of uncertainty about the state of the system which may be represented, such as natural variability in time or space, lack of knowledge etc.: Section 3 will come back to that discussion. At this stage, only preliminary comments will be given. The frequentist interpretation considers x and z = G(x,d) as observable realizations of an uncertain model standing for an underlying population of truly variable systems (in time, space, pieces …), and the “true” uncertainty model can hence theoretically be inferred from a very large set of data. The subjective interpretation considers probability distributions as subjective preferences of the decision-maker, without the need for an underlying repeatable phenomenon with observable frequencies, which is appropriate when dealing with a unique system.
As discussed in §2, it is usual in some cases to explicitly distinguish between the aleatory and epistemic components of uncertainty. Well beyond the epistemological debate, such a distinction is crucial because of its significant analytical, computational and decision-making consequences, in the sense that the study process may highlight where uncertainty reduction, be it in data collection or model refinement, is most instrumental, yet at a corresponding cost in complexity. Mathematically, the key differences regard the probabilistic definition of the uncertainty model, which may be summarised as the potential distinction of two levels requiring single or double probabilistic settings:

- level 1 (mandatory): an uncertainty model is built upon the inputs X, representing the extent of uncertainty through a given probabilistic distribution FX(x | θX). This distribution is characterized by given parameters θX;

- level 2 (optional): an additional uncertainty model is built to represent the variability of, or lack of knowledge about, the precise values of the parameters θX that prescribe the extent of uncertainty: it takes the form of a supplementary distribution π(θX | ζ) encoding the random behaviour of those parameters, as a function of “hyper-parameters” ζ.
This is often referred to in the literature as the aleatory-epistemic distinction (Helton & Burmaster, 1996), level 1 modeling the aleatory (or “risk”) component while level 2 models the epistemic (or “uncertainty”) component. Any type of risk or uncertainty analysis will always develop the first level as a model of the uncertainty about the variables or events of interest. Such an uncertainty model (namely its parameters θX) needs to be estimated or inferred on the basis of all available pieces of information, such as data or expertise. More elaborate interpretations and controversies come up when considering the issue of the lack of knowledge regarding such an uncertainty description, as generated for instance by small data sets, discrepancies between experts or even uncertainty in the system model. This is particularly the case in the field of risk analysis or reliability, but it also happens in natural risk assessment, particularly flood risk. The incorporation of observed data to infer or validate the level-1 probabilistic model, when done through classical statistical estimation, generates statistical fluctuation in the parameter estimates: this is made obvious by confidence intervals around e.g. high-level flow quantiles or return periods.
This is even more so when considering the choice of the distribution for an input xi, for which traditional hypothesis-testing techniques give at most incomplete answers: is the Gaussian model appropriate for the distribution of xi, as opposed to, for instance, a lognormal or beta model? This results in a sort of “uncertainty about the uncertainty (or probabilities)”, although this formulation is controversial in itself and may be more specifically referred to as epistemic uncertainty about the aleatory (or variability) characteristics. This is where a potential level-2 uncertainty model may be contemplated to represent the uncertainty about the parameters, or even the laws themselves through an extended parameterisation. It is sometimes referred to as estimation uncertainty, in order to mark its strong link with the step of the process where the uncertainty model is estimated.
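This “uncertainty about the uncertainty” can be made tangible with a simple bootstrap sketch (the synthetic record and the Gumbel choice are illustrative assumptions): a high flow quantile is estimated from a small sample, and resampling displays the statistical fluctuation of the fitted parameters θX and of the derived quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# a small synthetic record of annual maximum flows (illustrative data only)
flows = stats.gumbel_r.rvs(loc=1000.0, scale=500.0, size=30, random_state=rng)

def fit_and_quantile(sample, return_period=100):
    """Fit a Gumbel level-1 model and return its 1/T quantile."""
    loc, scale = stats.gumbel_r.fit(sample)
    return stats.gumbel_r.ppf(1.0 - 1.0 / return_period, loc=loc, scale=scale)

q100_hat = fit_and_quantile(flows)

# bootstrap resampling to visualise the estimation (epistemic) uncertainty
boot = np.array([
    fit_and_quantile(rng.choice(flows, size=flows.size, replace=True))
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"estimated 100-yr flow: {q100_hat:.0f}, 95% CI: [{ci_low:.0f}, {ci_high:.0f}]")
```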
As introduced earlier, the combination of the system model and the uncertainty model should help to infer possible or likely ranges for z. Within a probabilistic framework, dedicated probabilistic quantities are needed to measure the likelihood of values for z. Such a quantity, used for the inference of the outputs of interest under uncertainty, will be called a quantity of interest (de Rocquigny, Devictor & Tarantola, 2008), otherwise referred to as a performance measure (Aven, 2003) or a risk measure in finance and economics. Some examples:

- percentages of error/uncertainty on the variable(s) of interest (i.e. coefficient of variation);

- confidence intervals on the variable(s) of interest;

- a quantile of the variable of interest (such as the value-at-risk or VaR in finance), possibly conditional on penalised inputs;

- probabilities of exceedance of a safety threshold or of an event of interest (sometimes termed “assurance” when considering conditional non-exceedance probabilities);

- the expected value (cost, utility, fatalities …) of the consequences;

- confidence intervals on the probability of an event.
In any case, the quantity of interest or risk measure will subsequently be noted cz(d), since it is computed to represent the likely values of the outputs of interest z and hence crucially depends on the choice of actions d that modifies the anticipated state of the system, albeit to an uncertain extent. A closer mathematical look shows that one could always write these quantities in the form of a functional of Z, cz = F[Z], typically requiring a computational process including multiple integration (hence not being a straightforward function). Assuming a double probabilistic setting, quantities of interest become formally more complex as they may extend to both levels, such as the range of possible values or a confidence interval for the coefficient of variation or the exceedance probability. Mathematically, the quantity of interest c(2)z is then a functional of the extended uncertainty model {FX(· | θX, d), π(θX | ζ)}, again involving multiple integrals, because a probabilistic second level typically introduces a second layer of integration on top of the level-1 quantities of interest. In other words, computing such a level-2 expectation involves a double integration, meaning “averaging the risk measure over the uncertainties in the risk components”. Such a quantity generally requires a highly costly double loop of Monte Carlo sampling (such as 100 × 10,000 runs). It is typically embedded in a decision criterion requiring that “the frequency of the undesired event of interest be less than 1/1000-yr at an α-level of confidence”, as encountered in protection against natural risk or integrated probabilistic risk assessment.
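A minimal sketch of such a double (level-2 over level-1) Monte Carlo loop follows, on a toy flood margin; the inner aleatory model, the epistemic spread on the Gumbel parameters and the 100 × 10,000 budget are all illustrative assumptions. The outer loop samples θX, the inner loop propagates the aleatory uncertainty, and the resulting exceedance probabilities are summarized by an epistemic confidence level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def margin(q, dike_crest=55.0):
    """Toy aleatory model: safety margin as a function of the flood discharge q."""
    return dike_crest - (50.0 + (q / 200.0) ** 0.6)

# outer (epistemic, level-2) sample: 100 plausible values of the Gumbel
# parameters theta_X = (mode, scale) of the yearly maximal discharge
n_outer, n_inner = 100, 10_000
theta_samples = np.column_stack([
    rng.normal(1000.0, 80.0, n_outer),   # epistemic spread on the mode
    rng.normal(500.0, 60.0, n_outer),    # epistemic spread on the scale
])

def exceedance_probability(theta):
    """Inner (aleatory, level-1) Monte Carlo: P[Z < 0 | theta_X]."""
    loc, scale = theta
    q = stats.gumbel_r.rvs(loc=loc, scale=scale, size=n_inner, random_state=rng)
    q = np.clip(q, 0.0, None)            # guard against unphysical negative draws
    return np.mean(margin(q) < 0.0)

pf = np.array([exceedance_probability(t) for t in theta_samples])
print(f"median P_f: {np.median(pf):.3f}")
print(f"95% epistemic quantile of P_f: {np.quantile(pf, 0.95):.3f}")
```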
The quantity of interest or risk measure is the key quantitative tool used to make decisions under uncertainty. From a theoretical standpoint, risk and uncertainty studies should eventually help in making decisions about appropriate actions in the context of a relative lack of knowledge about the state of the system. Decision-making is always about relative preferences used to select within the scope of available actions. Hence the quantity of interest cz(d) should be viewed as an appropriate means to rank the options in the face of uncertainty. The classical literature (Savage 1954; Bedford & Cooke 2001) identifies, for instance, expected utility as a central quantity of interest, the utility being a scalar function u(z) computed on the output of interest that represents the decision-maker's preferences (e.g. his risk aversion); “expected” meaning the expectation over the probabilistic uncertainty model on X, and hence on Z = G(X,d), a random vector which is interpreted to represent the decision-maker's subjective belief about the possible states of the system.
Yet, in more practical terms, uncertainty studies are not always implemented up to the completion of that ultimate theoretical goal of selecting within a scope of actions. In many cases, regulation specifies compliance with an “absolute” criterion. Far-reaching decision-making would weigh the relative preference between building a nuclear power plant (e.g. under seismic risk) vs. operating a hydro-electric facility (e.g. under flood risk and climate change impacts); or between accepting a standard device that has been controlled with a cheap (medium-accuracy) vs. an expensive (high-accuracy) metrological protocol. In fact, operational regulation or practice would often prefer to specify a fixed decision criterion (or risk acceptance criterion, tolerance specification etc.) to comply with, such as:

- “design should ensure that the mechanical failure margin remains positive for the reference earthquake in spite of uncertainty, with a probability of less than 10^-b of being negative”;

- “system redundancy design and maintenance should guarantee that the frequency of plant failure is less than 10^-b per year, at a 95% confidence level covering the uncertainties”;

- “there should be less than 3% uncertainty on the declared measurement value for the output of interest”;

- “up to a reference initiator at least (e.g. the 1000-yr flood), the installation is safe under design d”;

- “the range of the output variable of interest should always be less than 20%” or “the maximal value of the variable of interest should stay below a given absolute threshold”;

- …
Such decision criteria mostly appear as tests derived from the quantity of interest, comparing it to a fixed threshold: cz(d) < cs. Note that this type of decision criterion may be interpreted differently according to whether one considers the probabilistic modeling as a representation of subjective degrees of belief over the imperfectly known states of the system (as in the theory of expected utility), as modeling observable, objective frequencies of possible states within a time sequence of the system or a population of similar systems (as in the frequentist approach), or even both (as in a Bayesian approach).
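To illustrate how a single quantity of interest cz(d) supports both the relative selection between actions and an absolute compliance test cz(d) < cs, here is a toy sketch comparing two hypothetical dike crest levels against an illustrative exceedance-probability threshold (all numbers are assumptions). The same computation serves both to rank the two designs and to check each one against the threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def failure_probability(dike_crest, n=100_000):
    """Quantity of interest c_z(d): yearly overflow probability for a dike level d."""
    q = stats.gumbel_r.rvs(loc=1000.0, scale=500.0, size=n, random_state=rng)
    q = np.clip(q, 0.0, None)                   # guard against negative draws
    flood_level = 50.0 + (q / 200.0) ** 0.6     # toy hydraulic relation
    return np.mean(flood_level > dike_crest)

c_s = 1e-2                      # illustrative regulatory threshold
for d in (54.0, 56.0):          # two candidate actions (dike crest levels)
    c_z = failure_probability(d)
    verdict = "complies" if c_z < c_s else "does not comply"
    print(f"d = {d} m  ->  c_z(d) = {c_z:.2e}  ({verdict} with c_z < {c_s})")
```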
To complete the picture, it is necessary to consider also that many risk and uncertainty studies are undertaken at relatively upstream stages, not yet facing explicit decision-making processes, let alone regulatory criteria. This is the case in the upstream development of phenomenological models or in numerical code qualification. Even when these models and codes are designed to ultimately help decision-making or industrial certification, the goal of upstream uncertainty studies may be firstly to understand the extent of uncertainty affecting the model prediction, particularly the relative importance of input uncertainties, in order to intensify measurement or modeling efforts on the most important ones. Once a better understanding of the uncertainties and of the behaviour of the system model is available, so that it is considered to have the potential for later operational use, it may be necessary to establish formally its range of validity and to control its residual prediction uncertainty: a more or less sophisticated calibration, validation or qualification process aims to accredit its validity. Although those two goals may be understood as only indirectly linked with decision-making, it may be seen that the quantity of interest is still the key quantitative tool involved. For instance, the importance ranking of the uncertain inputs or the proper model calibration differs according to whether a variance or a failure probability on the output of interest is selected as the quantity of interest, as discussed in (de Rocquigny et al., 2008). To summarise, industrial practice shows that, most of the time, the goals of any quantitative risk/uncertainty assessment belong to the following four categories:
- U (Understand): to understand the influence or rank the importance of uncertainties, thereby guiding any additional measurement, modeling or R&D efforts.

- A (Accredit): to give credit to a model or a method of measurement, i.e. to reach an acceptable quality level for its use. This may involve calibrating sensors, estimating the parameters of the model inputs, simplifying the system model physics or structure, fixing some model inputs, and finally validating according to a context-dependent level. In a sequential process it may also refer to the updating of the uncertainty model, through dynamic data assimilation.

- S (Select): to compare relative performance and optimize the choice of maintenance policy, operation or design of the system.

- C (Comply): to demonstrate compliance of the system with an explicit criterion or regulatory threshold (e.g. flood control, dam safety, nuclear or environmental licensing, etc.).
Moreover, the feedback process proves essential in practical risk/uncertainty studies: for instance, failure to secure compliance for a given design may lead to changing the actions in order to fulfil the criterion. This may involve changing the design itself in order to cover the uncertainties more conservatively, or investing more in information in order to reduce the sources of uncertainty, provided that those are truly reducible on the basis of additional scientific evidence. This would obviously rely on ranking the importance of uncertain inputs or events, and possibly require a new accreditation of the altered measurement or modeling chain before being able to use it to demonstrate compliance.
Remark also that quantities of interest may differ as to the extent to which the quantity computed on the output of interest z corresponds to probabilistic or deterministic uncertainty models on the various uncertain inputs x. Indeed, it should be reckoned that in most studies, besides describing some of the uncertain inputs by probabilistic distributions, at least some other uncertain inputs (variables or events) are fixed. This is because:

- for some model inputs, the decision process will conventionally fix the values despite the acknowledgement of uncertainties: for comparative purposes, through a conventional “penalisation”, i.e. the choice of a fixed “pessimistic” scenario etc.;

- uncertainties affecting some model inputs are considered to be negligible or of secondary importance with respect to the output variables of interest.
Noting xpn these fixed inputs, a more precise specification of the quantity of interest should involve the explicit conditioning on them: cz(d) = cz(d | xpn). As an essential point in building a generic description of risk and uncertainty modeling, such a notation appears to be a convenient means to unify the deterministic and probabilistic descriptions of uncertainty.
At this stage, a risk/uncertainty study can be formally described as involving a number of generic tasks, organised within the following main steps (see also Figure 3):
(A) specify the system model, the variables of interest, the setting and the quantity of interest (risk measure):

- choose the variables z according to the decision-making process, as well as a system model G(.) predicting z as a function of decision variables (d) and all input characteristics (x), defined so as to retrieve information and represent the uncertainty affecting the system;

- choose a representation of uncertainty, e.g. associating probabilistic and deterministic choices, FX(x | θX, xpn, d);

- choose a quantity of interest in accordance with the decision-making process.
(B) identify (i.e. estimate) the uncertainty model FX(· | θX, d) on the basis of available data and expertise.
(C) compute the quantity of interest cZ(d), i.e. propagate the input uncertainty model through G(.) to estimate the output distribution FZ(z | d) and/or the associated quantities of interest and sensitivity indices.
Figure 3 – generic conceptual framework
Step A has been introduced in the previous paragraphs; the subsequent steps B (data modeling) and C (computational propagation and sensitivity analysis) are briefly described hereafter.
Once the sources of uncertainty and the corresponding input variables have been identified, there is inevitably a step of uncertainty modeling (or quantification, characterisation etc. of the sources of uncertainty) that depends on the type of quantities of interest chosen:

- in a probabilistic framework, the uncertainty model will theoretically be a joint cdf on the vector of uncertain inputs (x), although it may be more simply specified as a set of simple parametric laws (e.g. Gaussian) on the components, with some independence hypotheses or approximate rank correlations;

- in a non-probabilistic framework, the uncertainty model would be, for instance, a Dempster-Shafer couple of plausibility/belief functions on x;

- in a deterministic framework, the maximal range on each component of x.

Whatever the framework, there is always the same need to take into account the largest amount of information:

- direct observations on the uncertain inputs, potentially treated statistically to estimate statistical models;

- expert judgment, under a more or less elaborate elicitation process and mathematical modeling: from the straightforward choice of intervals to more elaborate Bayesian statistical modeling, expert consensus building …;

- physical arguments: e.g. however uncertain, the input remains positive, or below a known threshold, for physical reasons …;

- indirect observations: this is the case when the model is calibrated/validated, and may involve some inverse methods under uncertainty;

in order to build a satisfactory “uncertainty model”, that is to say a measure of uncertainty on the inputs. Uncertainty modeling may be a resource-consuming step for data collection; it appears in fact to be a key step to which the final result is very sensitive, depending on the final goal and the quantities of interest involved: for instance, the choice of upper bounds or distribution tails becomes critical if the quantity of interest is an exceedance probability.
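Step B can be sketched for a single uncertain input as follows (the sample values, the lognormal choice and the expert bound are illustrative assumptions): direct observations are fitted by maximum likelihood, a physical positivity argument guides the choice of the law, and an expert-provided bound serves as a cross-check.

```python
import numpy as np
from scipy import stats

# direct observations of an uncertain input (illustrative values)
obs = np.array([2.1, 2.9, 3.4, 2.6, 4.8, 3.1, 2.2, 5.0, 3.7, 2.8])

# physical argument: the quantity is strictly positive -> lognormal candidate,
# fitted by maximum likelihood with the location fixed at zero
shape, loc, scale = stats.lognorm.fit(obs, floc=0.0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

# elementary goodness-of-fit check; small samples give weak answers,
# hence the level-2 "uncertainty about the uncertainty" discussed above
ks_stat, p_value = stats.kstest(obs, fitted.cdf)

# expert judgment used as a cross-check: a plausible upper bound
expert_upper_bound = 8.0
print(f"fitted lognormal: shape={shape:.2f}, scale={scale:.2f}")
print(f"Kolmogorov-Smirnov p-value: {p_value:.2f}")
print(f"P[X > expert bound] = {fitted.sf(expert_upper_bound):.4f}")
```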
Once an uncertainty model has been developed, the computation of the quantities of interest involves the well-known uncertainty propagation step (also known as uncertainty analysis). The propagation step is needed to transform the measure of uncertainty on the inputs into a measure of uncertainty on the outputs of the pre-existing model. In a probabilistic setting, this means estimating the cdf of z = G(x, d) knowing the cdf of x and given values of d, G(.) being a numerical model. According to the quantity of interest and to the system model characteristics, this may be a more or less difficult numerical step, involving a large variety of methods such as Monte Carlo sampling, accelerated sampling techniques (Rubinstein 1981), the simple quadratic summation of variances (ISO 1995), FORM-SORM or derived reliability approximations (Madsen et al, 1986), deterministic interval computations etc. Prior to undertaking one of these propagation methods, it may also be desirable to develop a surrogate model (equivalently referred to as a response surface, meta-model …), i.e. to replace the pre-existing system model by another one that leads to comparable results with respect to the output variables and quantities of interest, but which is much quicker and easier to compute.
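The two most common propagation routes mentioned above can be contrasted on a toy closed-form system model (the function and all distribution parameters are illustrative assumptions): the quadratic (Taylor) summation of variances versus crude Monte Carlo sampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def G(x1, x2, x3):
    """Toy closed-form system model z = G(x)."""
    return x1 * x2 / x3

# input uncertainty model: independent Gaussians (means and standard deviations)
mu = np.array([100.0, 0.8, 2.0])
sigma = np.array([5.0, 0.05, 0.1])

# 1) Taylor / quadratic summation of variances (linearisation around the mean)
eps = 1e-6
grad = np.array([(G(*(mu + eps * np.eye(3)[i])) - G(*mu)) / eps for i in range(3)])
var_taylor = np.sum((grad * sigma) ** 2)

# 2) crude Monte Carlo sampling
X = rng.normal(mu, sigma, size=(100_000, 3))
Z = G(X[:, 0], X[:, 1], X[:, 2])

print(f"Taylor:      mean = {G(*mu):.2f}, std = {np.sqrt(var_taylor):.3f}")
print(f"Monte Carlo: mean = {Z.mean():.2f}, std = {Z.std(ddof=1):.3f}")
```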
The sensitivity analysis step (or importance ranking) refers to the computation of so-called sensitivity or importance indices of the components of the uncertain input vector x with respect to a given quantity of interest on the output z. In fact, this involves a propagation step, e.g. with sampling techniques, but also a post-treatment specific to the sensitivity indices considered, typically involving some statistical treatment of the input/output relations that generates quantities of interest involving both the measure of uncertainty on the outputs and on the inputs. The large variety of probabilistic sensitivity indices (Saltelli et al 2004) includes for instance: graphical methods (scatter plots, cobweb plots …), screening (Morris, sequential bifurcations …), regression-based techniques (Pearson, Spearman, SRC, PCC, PRCC, etc.), non-parametric statistics (Mann-Whitney test, Smirnov test, Kruskal-Wallis test), variance-based decomposition (FAST, Sobol, correlation ratios), or local sensitivity indices on exceedance probabilities (FORM). Note that the expression “sensitivity analysis” is taken here in the comprehensive meaning encountered in the specialised uncertainty and sensitivity literature: in industrial practice, the same expression may refer more loosely to some elementary treatments, such as one-at-a-time variations of the inputs of a deterministic model or partial derivatives. These two latter kinds of indices are usually not suitable for a consistent importance ranking, although they may be a starting point.
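A hedged sketch of two elementary global sensitivity indices follows, on the same toy model restated here for self-containment: standardized regression coefficients (SRC) and Spearman rank correlations between the sampled inputs and the output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def G(x1, x2, x3):
    """Toy system model (same form as in the propagation sketch)."""
    return x1 * x2 / x3

names = ["x1", "x2", "x3"]
mu = np.array([100.0, 0.8, 2.0])
sigma = np.array([5.0, 0.05, 0.1])

X = rng.normal(mu, sigma, size=(20_000, 3))
Z = G(X[:, 0], X[:, 1], X[:, 2])

# standardized regression coefficients (SRC): linear-regression slopes
# rescaled by the input/output standard deviations
design = np.column_stack([np.ones(len(Z)), X])
beta, *_ = np.linalg.lstsq(design, Z, rcond=None)
src = beta[1:] * X.std(axis=0, ddof=1) / Z.std(ddof=1)

# Spearman rank correlations (robust to monotonic non-linearity)
rho = [stats.spearmanr(X[:, i], Z).correlation for i in range(3)]

for i, name in enumerate(names):
    print(f"{name}: SRC = {src[i]:+.2f} (SRC^2 = {src[i]**2:.2f}), Spearman = {rho[i]:+.2f}")
```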
(de Rocquigny et al 2008) shows that the best choices for such challenging computational steps do not depend on the particular specificities of a physical or industrial context as such, but on the generic features identified above: the computing cost and regularity of the system model, the dominant final goal, the quantities of interest involved, the dimensions of the vectors x and z etc. Recommendations drawn from the long-standing experience of other fields of engineering and risk assessment may hence be retrieved in order to guide emerging uncertainty applications in hydrological and hydraulic modeling. Indeed, the methodology has now enjoyed considerable industrial dissemination through tutorials and underpins the open source development platform Open TURNS.
Three typical industrial examples are provided to illustrate the framework, while a much larger spectrum may be found in (de Rocquigny, 2008). Remember that those examples were indeed more an inspiration for than a mere application of the global methodology introduced above: it was by removing the historic barriers between the sectoral approaches to uncertainty propagation and analysis – barriers that have been observed in recent industrial applications (metrology, reliability, risk analysis …) – that generic uncertainty problems could be formulated and placed in a consistent mathematical setting.
The first industrial example comes from the domain of metrology. In metrology or quality control, uncertainty analysis or control is quite a basic requirement associated with the qualification of any measurement device, chain of devices or production process; through sensitivity analysis, it is also a powerful means to optimise the costs and productivity of monitoring and quality control. It may sometimes even be mandatory under official regulations, such as in nuclear maintenance, radiological protection or environmental control: for instance, in application of European undertakings relating to the section of the Kyoto protocol concerning industrial emissions, a system of CO2 emission permits has been established requiring a declaration of uncertainty (through the variance) in order to secure fairness between real emissions and emission rights, and the stability of the financial markets for emission permits.
Probabilistic approaches are traditional in the field of metrology, whether environmental or not. They are already embodied in international standards such as the "Guide to the expression of uncertainty in measurement (GUM)" (ISO GUM, 1995): rather than aggregating elementary uncertainties in a deterministic manner, the GUM is based on the standard hypothesis that the uncertainty associated with each sensor or measurement operation [3], in comparison with a reference value, has an aleatory character that is efficiently modelled by probability calculus. To be precise about the reference value with respect to which uncertain deviations are considered, the standard recommends referring preferably to "the best available value" rather than to the physical "true value". On the one hand, the specification or definition of the quantity to be measured is never infinitely precise: at a micro-physical scale, complex temperature effects or external artefacts (such as background radiation, physical-chemical surface exchanges, …) constantly affect the length of a given rod at small time scales, or even blur the very notion of the boundary on which the definition of length is based. On the other hand, even if the quantity were very precisely defined, the true value is never observable, but only approached by sensors based on metrological benchmarks that are themselves uncertain. Zooming further down to quantum mechanics, Heisenberg's uncertainty principle establishes that any physical system is intrinsically uncertain, and perturbed by observational devices. The GUM thus recommends working not on "error", defined as the absolute deviation between the measurement result and the (unobservable) true value, but rather on "uncertainty", representing the fluctuation of the result of a measurement supposed to have reasonably corrected all systematic effects with respect to a benchmark. The random variable representing this uncertainty may include a "bias" (which therefore remains relative) if its expectation is not zero, even though some reserve the term "uncertainty" for the random variable with zero expectation, restricted to the fluctuation remaining after deduction of any bias.

[3] Including the noise affecting the operation of the sensor, local fluctuation of the measurement (...)
Supposing that this framework is accepted, a probabilistic quantity of interest is required in regulatory uncertainty studies. This consists in determining an "enlarged uncertainty" cz = %uncZ, defined by a confidence interval of 95% around the measured mass of CO2 (tCO2), whose ratio to the declared value should not exceed a given threshold, e.g. a maximum of 3% of uncertainty around the declared value. According to a classical Gaussian linear approximation, this criterion may be linked to a given multiple of the coefficient of variation, i.e. the ratio of the standard deviation to the measured mass.
Physically, in the case of the CO2 emissions of a coal-fired power plant (Figure 4), direct measurement of the emissions by a single sensor has proved unreliable. Several options are possible to measure the number of tons of CO2 emitted (tCO2): (a) flow-meters with integrated weighing mechanisms, (b) inference from the electric power produced for a specific consumption, (c) an inventory of stocks and inputs. They all imply the aggregation, through simple analytical operations, of several elementary measurements such as the tonnage of coal consumed, the lower calorific value and the corresponding emission factor or oxidation factors. In the example of option (c), coal consumption Cc is itself deduced from the balance of tonnage supplied and the variation in stocks, measured by cubature.
Figure 4 – Thermal power plant emissions (left) and metrological steps (right)
Mathematically, it is therefore a question of aggregating the uncertainties associated with each variable xi measured by the i-th operation or elementary sensor (generally evaluated by the suppliers during design) within the function z = G(x1, … xp) = G(x) expressing the variable of interest z, for example the annual tonnage of CO2. In such a context, the system model referred to earlier is simply a closed-form chain of elementary relations representing the metrological operations. The metrological sources of uncertainty Xi are classically modeled as Gaussian: this hypothesis is sometimes supported by arguments based on the symmetry of errors and the existence of multiple underlying additive physical phenomena. Yet, when uncertainties are bounded, for example because of a command and control mechanism, the choice of a uniform or even triangular distribution may be preferred. The expected value is generally taken to be equal to the result of the calibrated measurement, and the variance is supplied by the characteristics of the sensor. Linear or rank correlation coefficients account for the potential non-independence of uncertain inputs.
Moreover, the epistemic (or level-2) uncertainty in characterising the aleatory behavior of elementary devices may be explicitly included in metrological uncertainty modeling. For instance, empirical variances based on repeatability data can be multiplied by a factor greater than 1, decreasing with the number of measurements n used to estimate them, instead of simply taking the gross estimated value: this accounts for the possibility, through the effect of sampling fluctuation, of under-estimating the unknown true variance. Yet such practice is heterogeneous, because n is not always known to the industrial end-user. This question becomes particularly important when the cost of determining the sources of uncertainty is high, because of the necessary tests involving burdensome basic operations. For instance, in the nuclear field this includes the reception of fuel assemblies before reloading: the measurement process is very demanding, but the risk associated with too great a level of uncertainty is serious, since the mistaken acceptance of a fuel assembly too large for the core grid would result in the unavailability of the plant.
Propagation (or aggregation, combination etc.) of the elementary uncertainties is undertaken by one of the two methods accepted by international standards: either an approximate linear Gaussian combination, or a more accurate Monte Carlo sampling. Both also enable ranking of the relative importance of the sources of uncertainty, an output of key value to the industrial process. For the final result, the i-th input is only important insofar as its input uncertainty uncXi is large and/or the physical model is sensitive to it (via ∂G/∂Xi). In the CO2 example, the choice of a measurement chain for the subsequent checking of CO2 emissions was based on a minimised criterion %uncZ, but also used the importance ranking of the sources, ensuring that the heaviest contributions corresponded to sources that could not only be tracked for quality assurance, but were also, if possible, reducible.
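Under stated assumptions (a purely illustrative emission formula z = Cc × LCV × EF × OF and made-up relative uncertainties, not the plant's actual metrological chain), the sketch below reproduces the two accepted propagation routes and the contribution of each source to the enlarged uncertainty.

```python
import numpy as np

rng = np.random.default_rng(5)

# illustrative metrological chain: annual CO2 tonnage as a product of coal
# consumption, lower calorific value, emission factor and oxidation factor
mean  = {"Cc": 1.2e6, "LCV": 25.0, "EF": 0.095, "OF": 0.98}
rel_u = {"Cc": 0.010, "LCV": 0.008, "EF": 0.015, "OF": 0.002}  # relative std. uncertainties

def G(Cc, LCV, EF, OF):
    return Cc * LCV * EF * OF

z_declared = G(**mean)

# 1) GUM-style linear (Gaussian) combination: for a pure product, relative
#    variances simply add, and each term is an importance contribution
rel_var = {k: u ** 2 for k, u in rel_u.items()}
rel_std = np.sqrt(sum(rel_var.values()))
enlarged = 2.0 * rel_std                 # coverage factor k = 2 (about 95%)

# 2) Monte Carlo check with independent Gaussian sources
samples = {k: rng.normal(mean[k], rel_u[k] * mean[k], 200_000) for k in mean}
Z = G(**samples)

print(f"declared value: {z_declared:.3e} tCO2")
print(f"GUM enlarged uncertainty (k=2):  {100 * enlarged:.2f} %")
print(f"Monte Carlo 95% half-width:      {100 * 2 * Z.std(ddof=1) / Z.mean():.2f} %")
for k in mean:
    print(f"  variance contribution of {k}: {100 * rel_var[k] / rel_std ** 2:.0f} %")
```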
The second example comes from the field of structural reliability analysis and design margins. The design of industrial structures, be it in the nuclear, aerospace, offshore or transport sectors, has generated the need for rules or codes (Ellingwood, 1994) to prevent failure mechanisms and secure reliability levels in the face of the highly diverse sources of uncertainty affecting operating systems (variability of material properties, of operational loads, fabrication tolerances, …).
Beyond empirical design margins, a whole array of methods is usually referred to in structural reliability analysis (Madsen et al, 1986) to tackle a structure characterized by an event of interest, or more generally a group of events, leading to failure. Failure is a matter of structural definition: it may include a number of so-called failure modes, or physical phenomena such as sudden collapse or just crack initiation; they may happen under a certain number of conditions on two types of variables affecting the system behavior, the design variables (again noted d) and the physical variables (noted x = (xi)i=1…p). A failure function G(x,d), i.e. the system model of the example, encapsulates this knowledge of the different phenomena leading to failure.
Consider for instance the simplified example of the reactor vessel in nuclear safety: undesired failure could theoretically occur under the effect of an abnormal pressure-temperature state of the primary fluid, itself subjected to an internal initiator such as the drop in pressure following a pipe break elsewhere in the circuit. The stress upon a pre-existing flaw inside the vessel wall could then exceed the resilience margin of the material, resulting in a failure event defined either as the sudden rupture or, more conservatively, as the initiation of flaw propagation. Failure modeling involves firstly a complex finite-element thermo-mechanical model y = M(x,d) predicting stress and temperature fields as a function of numerous variables (properties of materials, flaw characteristics, the thermodynamics of accidental transients, the radiation received over time and the resulting embrittlement, etc.) as well as design or operational conditions d (such as temperature and pressure limits, recovery times etc.). The failure margin z, representing the variable of interest, is then computed by subtracting a stress intensity factor (noted K1(y,x,d)) from a toughness function (noted K1c(y,x,d)), altogether defining the failure function G(x,d) = K1c(y,x,d) - K1(y,x,d). Hence G(.) depends on the numerous uncertain parameters x = (x1, … xp) listed above.
56The field of structural reliability has a long history of standards and regulations in risk industries such as nuclear or aerospace. The criteria specify reliability requirements according to the components concerned, that is to say an absence of failure during a given period and for a given scenario, e.g. a conventional accident with hypotheses d concerning the structure. This generally means ensuring that the failure function remains positive over a wide range of possible input values. The traditional method consists in "penalising" with safety margins those variables x that are sources of uncertainty, by applying coefficients or safety factors fi to the "best-estimate" values, and then verifying reliability, zpn = G(xpn,d) > 0, by a deterministic calculation using the penalised parameters xpn.
57This approach is generally referred to in the industry as a deterministic approach or, if one prefers, as an approach through "penalised scenarios": it involves an elementary form of deterministic treatment of the sources of uncertainty, presupposing again that x -> G(x,.) is monotonic component by component in each xi, a mostly intuitive situation in fracture mechanics, albeit less straightforward in fluid mechanics. It has the considerable advantage of limiting the number of computations to one or a few runs of potentially complex mechanical models. It however requires agreeing on a reasonable upper limit of uncertainty for every component xi (or a lower one, depending on the direction of the monotonicity). As the identification of physically realistic maximal values may be intractable or lead to controversial expert debates, certain limits may in practice correspond to approximate quantiles αi for each component implicitly modelled as a random variable. For instance, a resistance property would be taken at its lower 95% value to account for the variability of materials, while a loading variable would be taken at its upper 95% value to account for lack of knowledge or randomness of the operating conditions. This is also referred to as a "partial safety factor" approach (Ellingwood, 1994), whereby safety factors are defined for each partial component xi of the vector x conditioning overall safety. A substantial literature continues to discuss the merits and the inherent conservatism of this kind of approach, which in fact aggregates potentially heterogeneous margins, reflected by the quantiles, in a barely controllable manner. Experience shows it is quite difficult to translate this into an overall probabilistic risk level that could be compared to other risk situations.
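A minimal sketch of such a penalised verification follows, assuming a toy failure function and illustrative Gaussian input models; the quantile levels simply mimic the partial safety factor logic described above.

```python
# Minimal sketch of a "penalised scenario" check, assuming G is monotonous in each
# input; the failure function, distributions and numbers are illustrative.
import numpy as np
from scipy import stats

def G(x, d):
    # hypothetical failure function: resistance minus load, shifted by a design margin d
    resistance, load = x
    return resistance - load + d

# implicit random-variable models behind the penalised values (illustrative)
resistance_dist = stats.norm(loc=200.0, scale=15.0)   # material property (MPa)
load_dist = stats.norm(loc=120.0, scale=20.0)         # operating load (MPa)

# penalise each component at an agreed quantile, in the unfavourable direction
x_pn = np.array([resistance_dist.ppf(0.05),   # characteristic low resistance
                 load_dist.ppf(0.95)])        # characteristic high load

d = 0.0
z_pn = G(x_pn, d)
print("penalised margin z_pn =", round(z_pn, 1), "-> OK" if z_pn > 0 else "-> fails")
```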
58For some years now the debate has been fuelled by comparison with approaches known as "probabilistic", which explicitly model the sources of uncertainty by random variables Xi, focus on the probability of threshold exceedance or failure in the scenario, defined by Pf(d) = P[G(X,d) < 0], and seek to ensure that it remains very close to 0. In fact it is frequently found that only part of the uncertain inputs are "probabilised", which comes down to conditioning on the penalised values of the other sources. Pf is then compared to a threshold or, if an absolute threshold of the order of 10^-b is too delicate to specify, at least to the probability of a reference scenario d0 considered reliable enough. Formally, these approaches known as "probabilistic" should rather be seen as yet another kind of mixed deterministic-probabilistic uncertainty assessment.
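The sketch below illustrates a crude Monte-Carlo estimate of such a Pf on a toy failure function (all distributions and values are assumptions); it also shows why the estimator degrades when Pf becomes very small.

```python
# Crude Monte-Carlo sketch of Pf(d) = P[G(X,d) < 0], with a hypothetical failure
# function and input distributions; useful only when Pf is not too small.
import numpy as np

rng = np.random.default_rng(1)

def G(x, d):
    resistance, load = x[..., 0], x[..., 1]
    return resistance - load + d          # illustrative failure function

n = 1_000_000
X = np.column_stack([rng.normal(200.0, 15.0, n),    # resistance
                     rng.normal(120.0, 20.0, n)])   # load
d = 0.0
fail = G(X, d) < 0
Pf_hat = fail.mean()
# binomial standard error of the estimator: the relative error explodes as Pf -> 0,
# which is why crude sampling struggles with rare failure events.
se = np.sqrt(Pf_hat * (1 - Pf_hat) / n)
print(f"Pf ~ {Pf_hat:.2e} +/- {2*se:.1e}")
```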
59Faced with this criterion, can uncertainty studies make use of either of the propagation methods already mentioned in metrology? Note that historic barriers between metrology and the reliability of industrial structures have prevented this question from easily emerging. More deeply, the problem arises essentially because we are interested, in structural reliability, in rare "failure events", that is to say the tail of the Z distribution. In addition, such events are often associated with a pre-existing system model G(.) that takes far longer to compute than a closed-form metrology formula. In this context the Gaussian and linear hypotheses associated with Taylor quadratic approximations often prove false, while Monte-Carlo sampling (MCS) requires too large a number of simulations to stabilise the estimator of Pf.
60Aside from methods to reduce the variance of MCS (reduction of the conditional dimension of the sampling, importance sampling, etc.), calculation strategies were historically developed specifically to evaluate the probability of exceeding a threshold, known in reliability research as First (or Second) Order Reliability Methods (FORM or SORM). Where the underlying approximations hold, FORM and SORM considerably reduce the computational load required to evaluate a very small Pf. These methods also generate importance factors that rank the sources of uncertainty: unlike in the metrological example, the ranking is made with respect to the threshold-exceedance quantity of interest rather than to the average uncertainty (the coefficient of variation) of z. Yet handling quantities of interest such as rare probabilities on complex system models remains an area of challenging mathematical research, from the point of view of both uncertainty propagation and sensitivity analysis.
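The following sketch illustrates the FORM idea on the same kind of toy problem, assuming independent Gaussian inputs so that the iso-probabilistic transform is a simple affine map; a real study would wrap a costly numerical model in place of G.

```python
# Minimal FORM sketch: find the design point u* (most probable failure point in
# standard normal space), approximate Pf ~ Phi(-beta) and read the importance factors.
# Failure function and distributions are illustrative only.
import numpy as np
from scipy import stats, optimize

# independent inputs: resistance ~ N(200, 15), load ~ N(120, 20) (illustrative)
means = np.array([200.0, 120.0])
stds = np.array([15.0, 20.0])

def G(x):                      # failure when G < 0
    return x[0] - x[1]

def g_of_u(u):                 # iso-probabilistic transform (Gaussian case: affine)
    return G(means + stds * u)

# design point: minimise ||u||^2 subject to g(u) = 0
res = optimize.minimize(lambda u: u @ u, x0=np.array([-1.0, 1.0]),
                        constraints=[{"type": "eq", "fun": g_of_u}],
                        method="SLSQP")
beta = np.sqrt(res.fun)
Pf_form = stats.norm.cdf(-beta)

# FORM importance factors: squared components of the unit design-point vector
alpha2 = (res.x / np.linalg.norm(res.x)) ** 2
print(f"beta = {beta:.2f}, Pf_FORM ~ {Pf_form:.2e}, importance factors = {np.round(alpha2, 2)}")
```

On this linear Gaussian toy case the FORM result coincides with the exact probability; on real non-linear models it is only an approximation, to be checked against sampling whenever affordable.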
- 4 No conclusions may be drawn out of this paper regarding the real flood prediction methods nor the f (...)
61The final example has been developed as a realistic benchmarking and educational example for uncertainty modeling throughout the French industry (de Rocquigny, 2006). While not used as such in real studies in the particular domain of flood risk4, it was designed to incorporate several aspects that are typical of risk analyses in various domains. The quantity of interest is the probability of exceeding a threshold, and the information available is typical of industrial studies: the input variables are affected by both aleatory and epistemic uncertainty and, although the case is virtual, data are available only for some of the uncertain inputs, in heterogeneous amounts and quality. The pre-existing system is in this case a residential area bordered by a dyke protecting it from uncertain flood events.
62Two main output variables of interest are involved. Safety-wise, the key variable is the overspill (Figure 5) caused by the rise s = zc - zd of the water level (zc) above the dyke crest (zd). A hydraulic model relates it to a number of features undergoing both natural meteorological and geo-morphological variability and lack of knowledge: the river flow (q), the riverbed elevations (up- and downstream: zm and zv) and the state of the river, typically represented through Strickler's friction coefficient ks. Another variable matters from an economic perspective: the complete cost cc, aggregating the investment cost ci, which depends on the control variable d = zd (dyke height), and the cost of damage cd if there is an overspill. The related financial model may itself incorporate a further source of uncertainty in the cost of any given overspill, cm, which is always somewhat unpredictable because of both the variability of land use and vulnerability over time (beyond what may be captured through likely scenarios) and the limitations of records of flood consequences.
Figure 5 – Flood risk model
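As a rough illustration of such a hydraulic model, the sketch below uses a uniform-flow (Strickler-type) closed form relating the water level to q, ks, zm and zv; the reach length, river width and the exact formula are assumptions of the sketch, not necessarily those of the benchmark case.

```python
# Illustrative stand-in for the hydraulic model: a uniform-flow (Strickler-type)
# relation giving the flood water level zc and the overspill s = zc - zd.
import numpy as np

L_reach, b = 5000.0, 300.0      # assumed river reach length (m) and width (m)

def water_level(q, ks, zm, zv):
    slope = (zm - zv) / L_reach
    depth = (q / (ks * b * np.sqrt(slope))) ** 0.6   # uniform-flow water depth
    return zv + depth                                # water level above datum

def overspill(q, ks, zm, zv, zd):
    return water_level(q, ks, zm, zv) - zd           # overspill if s > 0

print(round(overspill(q=1200.0, ks=30.0, zm=55.0, zv=50.0, zd=55.5), 2))
```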
63Essential stakes may concern either safety or financial optimisation, leading to two different final goals and quantities of interest:
- Comply – ensure that the risk of dyke overspill is below a prescribed level. The quantity of interest is then an exceedance probability, and the decision criterion a maximal acceptable probability.
- Select – fit the design so as to optimize the complete cost cc. The quantity of interest is then either the expected cost (generally a rather risky approach), the probability of catastrophic damage (e.g. cc exceeding a solvency threshold), or a utility-based quantity of interest representing risk aversion.
64Several approaches to uncertainty assessment are thus possible for one or other of these final goals (see Table 1). For the first type of goal, several combinations of aleatory and epistemic levels of uncertainty may be pursued. The random character of flood flows being widely acknowledged, all approaches share the prior requirement of a statistical treatment of the data using extreme value theory to estimate a flow distribution: this includes estimating a reference quantile (typically 1% or 0.1%, i.e. a 1/100 or 1/1000-year return period) but also the associated confidence interval, which embodies to some extent the epistemic uncertainty generated by dataset limitations, although the uncertainty arising from the choice of the distribution shape is much harder to circumscribe. The possible approaches are then roughly as follows:
- The so-called "deterministic" approach: this is based on an upper limit of the confidence interval for the conventional flow quantile (e.g. the millennial flood at 70% confidence), plus the same type of penalisation as mentioned above in structural reliability for the remaining uncertain inputs. As in the mechanical example, the big advantage is that a single physical calculation suffices to infer the quantity of interest, namely a penalised water level for the 70%-upper millennial flood; yet it is hard to fully interpret the risk level associated with such a composite quantity of interest, and little further information can be retrieved from the single penalised run.
- The mixed "probabilistic on quantile" approach: all uncertain inputs other than the flow are represented probabilistically at the same level as the epistemic uncertainty around the millennial flow, and MCS is used to estimate the quantity of interest, a level-2 probabilistic figure, namely an upper confidence bound on the water level for the millennial flood. The Taylor quadratic approximation, involving only a few computations, is often sufficient: this approach is more comprehensively probabilistic, yet remains a rather complex double-level one.
- The "direct probabilistic" approach: all types of uncertainty, whether the flow randomness, the uncertain state of the river or the uncertain parameters of the extreme value distribution, are sampled together in a single-level setting, and MCS is needed to estimate a millennial quantile for z, i.e. the water level exceeded on average once every 1000 years, instead of the water level reached for the flood flow exceeded once every 1000 years. Many calculations are required in this last approach, but it leads to a single-level quantity of interest (a sort of "average flood risk") and is more amenable to the second type of goal, "Select" (a minimal sampling sketch of this approach is given after Table 1).
Table 1: Various possible approaches to uncertainty treatment of flood levels
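The sketch below illustrates the direct probabilistic approach on the toy hydraulic stand-in introduced above: a Gumbel extreme-value model is fitted to a synthetic flow record, its (epistemic) parameter uncertainty is represented by bootstrap refits, and everything is sampled in a single level to read off a millennial water level. All data, distribution choices and the hydraulic closed form are assumptions of the sketch.

```python
# Sketch of the "direct probabilistic" approach: aleatory flow randomness (Gumbel),
# epistemic parameter uncertainty (bootstrap of the fit) and the other uncertain
# inputs are sampled together; the millennial water level is the 0.999 quantile of z.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
L_reach, b, zd = 5000.0, 300.0, 55.5          # assumed reach, width, dyke crest

def water_level(q, ks, zm, zv):
    return zv + (q / (ks * b * np.sqrt((zm - zv) / L_reach))) ** 0.6

# synthetic record of annual maximal flows
flows_obs = stats.gumbel_r.rvs(loc=1000.0, scale=500.0, size=40, random_state=rng)

n = 200_000
# epistemic layer: bootstrap resampling of the record, refitting the Gumbel parameters
boot = rng.choice(flows_obs, size=(200, flows_obs.size), replace=True)
params = np.array([stats.gumbel_r.fit(sample) for sample in boot])
pick = rng.integers(0, len(params), n)
q = stats.gumbel_r.rvs(loc=params[pick, 0], scale=params[pick, 1], random_state=rng)

# uncertainty on the river state (illustrative distributions)
ks = rng.normal(30.0, 5.0, n).clip(10.0)
zm = rng.triangular(54.0, 55.0, 56.0, n)
zv = rng.triangular(49.0, 50.0, 51.0, n)

zc = water_level(np.maximum(q, 1.0), ks, zm, zv)
print("millennial water level ~", round(np.quantile(zc, 0.999), 2),
      "m ; P(overspill) ~", (zc > zd).mean())
```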
65It is not the intent of the present paper to discuss the appropriateness of one or the other approach regarding flood risk in particular, an issue deserving deeper domain-specific and regulatory consideration. Let us simply note here that, aside from the differences in computation time and study complexity, these approaches in fact illustrate different choices of quantity of interest (more or less probabilistic) and of whether to represent the aleatory and epistemic features separately. The first two approaches explicitly separate the randomness (or "risk") expressed by the millennial flow quantile from the lack of knowledge of the state of the river and the limitations of the flow statistical estimation (or "uncertainty"), the latter being treated either by deterministic penalisation (1st approach) or by aggregation of probabilities (2nd approach). The third approach mixes aleatory and epistemic components together into a fully probabilistic average. Yet all three illustrate the same overall methodological approach introduced in §3.
66To conclude this section, Table 2 summarises the main characteristics illustrated by the three examples, which all fit within the generic methodological approach in spite of their diversity.
Table 2: Summary of the examples
| | Metrology of CO2 emissions | Structural reliability of a nuclear vessel | Flood risk |
|---|---|---|---|
| Sources of uncertainty | Metrological errors; variability of operating conditions | Variability of material properties; randomness of accidental transients; lack of knowledge of fracture-mechanical features | Flow randomness and epistemic uncertainty in the extreme value distribution; lack of knowledge and natural variability of riverbed and friction conditions |
| Final goal of the uncertainty study | Accredit the measurement result and comply with the metrological criterion; understand and reduce the main sources of uncertainty | Certify a safety criterion; understand and rank the sources of uncertainty with respect to failure threshold exceedance | Comply: flood protection up to a regulatory level; Select: design to optimize the complete cost |
| Modeling paradigm | Generally standard probabilistic, possibly with a deterministic second level | Generally a mixed deterministic-probabilistic setting | Risk and uncertainty separated or not, in a double- or single-level probabilistic or mixed setting |
| Challenges | Sensor calibration heterogeneity; dataset sizes | Acceptability of deterministic vs. probabilistic settings; computational and mathematical complexity in handling rare probabilities | Scarcity of data to control extreme event distributions; complexity of handling double probabilistic criteria |
67This last section discusses a number of open challenges to the implementation of uncertainty treatment, once a generic framework is made available. These should be seen as applying both to industrial or environmental fields in which there is already some experience of uncertainty treatment, such as nuclear safety or waste management (think of the Yucca Mountain nuclear waste facility, whose performance assessment involves a considerable effort to represent uncertainty), and to fields where the practice is more recent. Climate change may be seen as a frontier case in that respect, for which each of the three challenges discussed hereafter (mobilizing information on uncertainty sources, treating uncertainty numerically, and building representations acceptable under a precautionary principle) takes on acute dimensions.
68A key practical challenge in the implementation of uncertainty studies regards the quantification of the sources of uncertainty, or uncertainty modeling: as mentioned above, this refers to the issue of choosing, in an accountable manner, statistical models (mostly cdfs) for the uncertain inputs. Needless to say, the relevance and significance of the entire uncertainty study rely upon the quality of this input uncertainty model. Key difficulties arise from the highly limited sampling information directly available on uncertain input variables in real-world industrial cases: real samples in water management or environmental studies, if any, often fall below the critical sizes required for stable statistical estimation.
69The classical fallback is to involve expertise and choose the uncertainty distributions directly, in a more or less formalised way. A few historical examples of large-scale nuclear waste or environmental impact assessments did involve a structured elicitation of expertise (Granger Morgan & Henrion 1990; Cooke 2001), including prior training and calibration steps in order to capture the expertise in the non-trivial form of probability distributions, quantiles, etc., possibly followed by consensus-building and feedback steps to reconcile diverging expert views or field data. Nevertheless, most publications give little detail on the underlying process, or openly acknowledge deliberate choices such as: "the uncertain friction coefficient was deemed Gaussian, with a coefficient of variation taken at 10% as a reasonable figure", or possibly "comparing three choices, e.g. 5%, 10% or 30%, for the coefficient of variation". The practical selection of simple models such as uniform or Gaussian cdfs between physically plausible bounds is sometimes justified on grounds of the maximum entropy principle (Cover & Thomas 1990), although this suffers from a number of paradoxes: why choose a uniform model for Xi when the physical model later involves (Xi)², a variable whose distribution is no longer uniform?
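The paradox can be seen numerically in a few lines (bin counts and sample size are arbitrary choices for the sketch):

```python
# Quick illustration of the invariance paradox: if Xi is modelled as uniform on
# [0, 1] (maximum entropy between bounds), then Xi**2 is far from uniform, so
# "non-informativeness" depends on the chosen parametrisation.
import numpy as np

rng = np.random.default_rng(3)
xi = rng.uniform(0.0, 1.0, 100_000)
hist, _ = np.histogram(xi ** 2, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))   # the density of Xi^2 piles up near 0 instead of being flat
```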
70Another tempting strategy is to integrate indirect information, i.e. data on other, more easily observable parameters that are linkable to the uncertain variables of interest through a physical model. Indeed, flood monitoring generates data on maximal water elevations or velocities rather than on uncertain friction coefficients or riverbed topology. Run-off coefficients and Strickler or Manning friction coefficients are typical examples, in the hydrological or hydraulic domain, of uncertain inputs for which no direct data is available, although rainfall-flow, stage-discharge or stage-line curves could provide indirect data to calibrate against. This approach, which involves inverting a physical model to transform the indirect information, is intimately connected to classical data assimilation, parameter identification, model calibration or updating techniques, although inverse uncertainty identification has some distinctive features: it concerns the way uncertainty sources are conceptually acknowledged and mathematically modelled on unknown model parameters, or as model uncertainty.
71While inverse probabilistic techniques are already old (Beck & Arnold 1977; Tarantola, 1987), it is only quite recently (Kurowicka & Cooke 2002) that full probabilistic inversion was considered, in the sense that the distribution of intrinsic (or irreducible, aleatory) input uncertainty is sought. Classical data assimilation (Talagrand, 1997) or parameter identification techniques (Beck 1987; Walter & Pronzato 1994) involve the estimation of input parameters (or initial conditions, for example in meteorology) that are unknown but physically fixed; this is naturally accompanied by an estimation uncertainty, the variance of which may be computed. However, such estimation uncertainty is purely epistemic or reducible, which is not satisfactory in the case of intrinsically uncertain or variable physical systems, i.e. systems whose input values not only suffer from lack of knowledge but also vary from one flood event to another. In this relatively new research field, a number of new algorithms (Celeux et al. 2007; de Rocquigny & Cambier 2008) have been developed as extensions of, or alternatives to, the classical (mostly linear Gaussian) algorithms (e.g. de Crécy 1997).
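A minimal sketch of full probabilistic inversion in the linear Gaussian case follows: the intrinsic variability of an unobserved input is recovered from indirect, noisy observations of a model output by maximising the marginal likelihood. The linear model, noise level and data are synthetic assumptions; real algorithms extend this idea to non-linear models.

```python
# Full probabilistic inversion, linear Gaussian setting: X varies from one event to
# another, X ~ N(mu, sigma^2) with (mu, sigma) unknown; we only observe
# Y = a*X + b + measurement noise, whose marginal distribution is Gaussian.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
a, b, s_noise = 2.0, 1.0, 0.5           # known (calibrated) linear model and noise std
mu_true, sigma_true = 3.0, 0.8          # intrinsic input variability to be recovered

x_events = rng.normal(mu_true, sigma_true, 200)              # unobserved inputs
y_obs = a * x_events + b + rng.normal(0.0, s_noise, 200)     # indirect observations

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    sd_y = np.sqrt(a ** 2 * sigma ** 2 + s_noise ** 2)        # marginal std of Y
    return -stats.norm.logpdf(y_obs, loc=a * mu + b, scale=sd_y).sum()

res = optimize.minimize(neg_log_lik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated intrinsic input distribution: N({mu_hat:.2f}, {sigma_hat:.2f}^2)")
```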
72An uncertainty study inevitably involves a number of calls to the physical model code that is much larger than in a traditional "best-estimate" study (a single "penalised" calculation). We have seen how, in a strong deterministic paradigm, the maximisation of the response over the uncertain domains implies numerous optimisation calculations; in a simple probabilistic paradigm, even with accelerated methods, at least several dozen or hundred calculations are necessary. The following contexts are even greedier (10^3 to 10^5 calculations or more): (i) the "mixed deterministic-probabilistic" paradigm, (ii) optimisation under uncertainty, also known as stochastic optimization, or (iii) inverse probabilistic modeling of the sources of uncertainty. In the first case, the computational cost comes from the need to nest an interval-based maximisation over the deterministic components with, for each point, a conditional probabilistic calculation over the probabilised variables. In the latter two cases, nesting optimization algorithms with propagation by sampling is required in the general (non-linear, non-Gaussian) case, generating a large computational cost.
73Handling uncertainty is hence a major client of high-performance computing (HPC), a domain in which industrial players are gradually joining the traditional academic champions, as shown by the supercomputing hall of fame (www.top500.org). Aside from the internal optimisation of the code solvers themselves and their parallelisation, the numerical challenges posed by large-scale uncertainty treatment may, depending on the propagation methods adopted, benefit greatly from massively distributed computing. Monte-Carlo sampling is trivially distributable, and may indeed be viewed as lying at the historical origin of computing, remembering that some of the earliest calculations run by Von Neumann and his colleagues on the ENIAC machine in the late 1940s were Monte-Carlo neutronics. Beyond simple Monte-Carlo, accelerated sampling (e.g. through Latin Hypercube Sampling (LHS), stratification, importance sampling; see Rubinstein & Kroese, 2007) is widely disseminated practice. Like the other advanced uncertainty propagation algorithms mentioned in §3.4.2, these may require numerical development in order to take full advantage of parallel computing, or acceleration through more or less automated code differentiation to benefit from gradients, all of which represents an area of large research potential for computer science.
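As a small illustration of accelerated sampling, the sketch below compares the estimator variance of crude Monte-Carlo and Latin Hypercube Sampling on a cheap toy function; the function and sample sizes are arbitrary choices, and in real studies each evaluation would be a costly code run, so the variance reduction translates directly into saved computations.

```python
# Crude Monte-Carlo vs. Latin Hypercube Sampling (LHS) for estimating the mean of a
# toy model; repeating the estimation shows the reduced scatter of the LHS estimator.
import numpy as np
from scipy.stats import qmc, norm

def model(u):                    # toy model taking standard-normal inputs
    return np.exp(0.3 * u[:, 0]) + u[:, 1] ** 2

def estimate(n, use_lhs, seed):
    if use_lhs:
        u01 = qmc.LatinHypercube(d=2, seed=seed).random(n)   # stratified [0,1)^2 design
    else:
        u01 = np.random.default_rng(seed).uniform(size=(n, 2))
    return model(norm.ppf(u01)).mean()

n = 200
mc = [estimate(n, False, s) for s in range(200)]
lhs = [estimate(n, True, s) for s in range(200)]
print("std of estimator  crude MC:", round(np.std(mc), 4), " LHS:", round(np.std(lhs), 4))
```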
74The generic framework set out above has shown that one can technically represent a wide variety of sources of uncertainty in an industrial study and control their impact according to the output variable selected. Everything nevertheless depends on the quantity of interest fixed for the study, the paradigm chosen (for example "mixed deterministic-probabilistic"), and the more or less probabilistic treatment devoted to the different sources of uncertainty, i.e. Step A of the framework.
75We should note here that the choice of the quantity of interest or risk measure cZ can naturally be linked to a modeling of the decision-maker's preferences within a decision theory: a decision rule based on the expected utility of the selected output variable can be implemented to model the decision-maker's risk aversion (Von Neumann & Morgenstern, 1944). More generally, a decision rule with non-linear probability (that is to say, based on a subjective transformation of probabilities by the agents) also constitutes a generalised representation of the risk aversion involved (Quiggin, 1982). When probabilities are judged to be imprecise, this transformation can be interpreted as an aversion to uncertainty on the probability distribution itself. While the calculation of the criterion cZ becomes more sophisticated, because the functional on the distribution of Z is more complex, this does not structurally change the global framework defined above.
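As an illustration of such preference-based quantities of interest, the sketch below computes, from samples of the complete cost, an expected (dis)utility and a rank-dependent expectation with a distorted probability weighting; the cost distribution, utility and weighting functions are illustrative assumptions only, not prescriptions.

```python
# Two risk measures computed from samples of the complete cost cc: the expected
# utility of a risk-averse decision maker, and a rank-dependent (distorted
# probability) expectation of the cost.
import numpy as np

rng = np.random.default_rng(5)
cc = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)     # samples of the complete cost

def utility(c, rho=0.5):
    return -np.expm1(rho * c) / rho          # exponential (CARA-type) disutility of cost

def distortion(p, gamma=0.7):
    # illustrative probability weighting (distortion) function, overweighting small p
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

expected_cost = cc.mean()
expected_utility = utility(cc).mean()

# rank-dependent expectation: reweight the ordered costs by distorted tail probabilities
c_sorted = np.sort(cc)
tail = 1.0 - np.arange(len(cc) + 1) / len(cc)             # empirical survival probabilities
weights = distortion(tail[:-1]) - distortion(tail[1:])    # distorted probability increments
rank_dependent_cost = np.sum(weights * c_sorted)

print(round(expected_cost, 2), round(expected_utility, 3), round(rank_dependent_cost, 2))
```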
76When contemplating the regulation of major and rare risks, this step may appear closely linked to a certain vision of risk and its social acceptability, which connects to the recent debate on the "precautionary principle" (Dupuy, 2002). The mode of treatment of uncertainties is indeed a matter of debate, depending on the level of subjective or cognitive uncertainty, when the consequences are long-term or serious, for example the storage of long-lived waste, or climate change. Accepting a probabilistic treatment when the uncertain event will occur only once, and in a manner that is not observable ex ante by the decision-makers, is a delicate matter: one might prefer, for example, a min-max approach that minimises the damage associated with the worst case, which corresponds, in our industrial framework, to assigning deterministic intervals to the serious sources of uncertainty within a mixed deterministic-probabilistic approach, albeit a very computationally burdensome one.
- 5 See the commentary by J.P. Dupuy (Dupuy, 2002) on the fact that at a certain stage uncertainty is n (...)
77Even if the industrial framework proposed above appears "neutral" to some extent, insofar as it allows the mathematical expression of a varied range of choices, there is of course a limit to its application: it should be reserved for uncertainty that does not exceed a certain extent, for a quantitative treatment to remain meaningful and usable5.
78The goal of this paper was to discuss the emergence of a generic framework for the apprehension of uncertainties, in spite of the epistemological controversies and historic barriers between the industrial fields involved (metrology, reliability, statistics, numerical analysis, etc.). The principle followed is to pick out the main steps, such as the quantification or modeling of the sources of uncertainty (Step B), possibly requiring inverse modeling or validation (Step B'), followed by their propagation through a model of the pre-existing industrial system (Step C), and the resulting importance ranking (Step C') or optimisation. This progression leads to the examination of methods and the selection of the most relevant ones, those which closely associate the applied mathematics involved with the analysis of the physical system, the emphasis being placed on the problem to be solved, which itself depends on the type of regulation concerned or the decision criterion chosen (Step A), and not on the specific industrial or phenomenological domain. Furthermore, while this global approach shows that other mathematical paradigms are possible, the mixed deterministic-probabilistic framework appears to play a central role in current industrial applications, giving rise to numerous problems of statistical modeling and of scientific computing.
79In terms of the industrial implementation of calculation methods (estimation of sources, propagation, importance ranking or sensitivity analysis in a large model of the pre-existing system, etc.), the global approach shows that there is not one single answer but rather a portfolio of methods to be managed within the proposed global framework, with a view to meeting one or other of the criteria set for an industrial study of uncertainties. This suggests that, rather than implementing a single research field, applications should be oriented towards capitalizing on all the various numerical and statistical algorithms available, made transparent in order to increase public accountability on the key uncertainty and risk issues, which is the whole object of the development of open source initiatives. Beyond this, several directions for research remain to be explored. Aside from the traditional challenge of propagation and importance ranking associated with uncertainty and sensitivity analysis, which remains of major importance, it is worth noting that the quantification (or modeling) of the sources by inverse methods (Step B') remains little used in spite of its very high industrial potential. This is especially so when it is linked to the delicate but promising area of the evaluation of expert judgment, or to the question of modeling dependencies (also in Step B).
80Naturally an essential point concerns the underlying cultural development, both in industrial teams and within the regulatory framework imposed on them. Over and above efforts to provide cross-cultural training, the question rapidly becomes one of the acceptability, precaution and legibility associated with regulatory criteria, a debate lying at the heart of society's attitude to risk. These questions remain open, in particular as regards the delicate choices of differentiating or aggregating uncertainties according to their nature, together with the global conceptual apprehension of risks, when considering issues of such public significance as industrial safety, natural hazards or environmental impact.
- 6 There would no doubt be a certain logical paradox in any attempt to pretend being able to do so.
81Common sense should naturally be exercised in interpreting the results of an uncertainty study, which can never claim to circumscribe completely all the uncertainties affecting an attempt to model real problems6, particularly when important industrial outcomes are at stake. At the very least, recent applied research and industrial experience lead us to think that, by systematically encouraging people to question hypotheses (in Step B) and to test numerous alternative calculations (in Step C), a study of uncertainties, even if its ambition appears presumptuous, will always be more reliable than a straightforward deterministic approach whose justification resides in the impossibility of giving complete confidence to quantitative uncertainty modeling.
Acknowledgements go to the co-editors and authors of the book "Uncertainty in Industrial Practice – A Guide to Quantitative Uncertainty Management", which is closely linked to this paper.