Info-gap decision theory

Info-gap decision theory is a non-probabilistic decision theory that seeks to optimize robustness to failure – or opportuneness for windfall – under severe uncertainty,^{[1]}^{[2]} in particular applying sensitivity analysis of the stability radius type^{[3]} to perturbations in the value of a given estimate of the parameter of interest. It has some connections with Wald's maximin model; some authors distinguish them, others consider them instances of the same principle.
It has been developed since the 1980s by Yakov Ben-Haim,^{[4]} has found many applications, and has been described as a theory for decision-making under "severe uncertainty". It has also been criticized as unsuited for this purpose, and alternatives have been proposed, including such classical approaches as robust optimization.
Summary
Info-gap is a decision theory: it seeks to assist in decision-making under uncertainty. It does this by using three models, each of which builds on the last. One begins with a model for the situation, in which some parameter or parameters are unknown. One then takes an estimate for the parameter, which is assumed to be substantially wrong, and analyzes how sensitive the outcomes under the model are to the error in this estimate.
 Uncertainty model
 Starting from the estimate, an uncertainty model measures how distant other values of the parameter are from the estimate: as uncertainty increases, the set of possible values grows – if one is this uncertain in the estimate, what other values of the parameter are possible?
 Robustness/opportuneness model
 Given an uncertainty model and a minimum level of desired outcome, then for each decision: how uncertain can you be and still be assured of achieving this minimum level? (This is called the robustness of the decision.) Conversely, given a desired windfall outcome: how uncertain must you be for this desirable outcome to be possible? (This is called the opportuneness of the decision.)
 Decision-making model
 To decide, one optimizes either the robustness or the opportuneness, on the basis of the corresponding model. Given a desired minimum outcome, which decision is most robust (can withstand the most uncertainty) and still give the desired outcome (the robust-satisficing action)? Alternatively, given a desired windfall outcome, which decision requires the least uncertainty for the outcome to be achievable (the opportune-windfalling action)?
Models
Info-gap theory models uncertainty (the horizon of uncertainty) as nested subsets around a point estimate of a parameter: with no uncertainty, the estimate is correct, and as uncertainty increases, the subset grows, in general without bound. The subsets quantify uncertainty – the horizon of uncertainty measures the "distance" between an estimate and a possibility – providing an intermediate measure between a single point (the point estimate) and the universe of all possibilities. This yields a measure for sensitivity analysis: how uncertain can an estimate be while a decision based on this incorrect estimate still yields an acceptable outcome – what is the margin of error?
Info-gap is a local decision theory, beginning with an estimate and considering deviations from it. This contrasts with global methods such as minimax, which performs worst-case analysis over the entire space of outcomes, and probabilistic decision theory, which considers all possible outcomes and assigns probabilities to them. In info-gap, the universe of possible outcomes under consideration is the union of all of the nested subsets:

 𝒰 = ⋃_{α ≥ 0} 𝒰(α, ũ)
Info-gap analysis gives answers to such questions as:
 under what level of uncertainty can specific requirements be reliably assured (robustness), and
 what level of uncertainty is necessary to achieve certain windfalls (opportuneness).
It can be used for satisficing, as an alternative to optimizing in the presence of uncertainty or bounded rationality; see robust optimization for an alternative approach.
Comparison with classical decision theory
In contrast to probabilistic decision theory, info-gap analysis does not use probability distributions: it measures the deviation of errors (differences between the parameter and the estimate), but not the probability of outcomes – in particular, the estimate is in no sense more or less likely than other points, as info-gap does not use probability. Because it does not use probability distributions, info-gap is robust in the sense that it is not sensitive to assumptions about the probabilities of outcomes. However, the model of uncertainty does include a notion of "closer" and "more distant" outcomes, and thus makes some assumptions; it is not as robust as simply considering all possible outcomes, as in minimax. Further, it considers a fixed universe, so it is not robust to unexpected (unmodeled) events.
The connection to minimax analysis has occasioned some controversy: Ben-Haim (1999, pp. 271–272) argues that info-gap's robustness analysis, while similar in some ways, is not minimax worst-case analysis, as it does not evaluate decisions over all possible outcomes, while Sniedovich (2007) argues that the robustness analysis can be seen as an instance of maximin (not minimax), applied to maximizing the horizon of uncertainty. This is discussed in criticism, below, and elaborated in the classical decision theory perspective.
Basic example: budget
As a simple example, consider a worker with uncertain income. They expect to make $100 per week; if they make under $60 they will be unable to afford lodging and will sleep in the street, and if they make over $150 they will be able to afford a night's entertainment.
Using the info-gap absolute error model

 𝒰(α, ũ) = { u : |u − ũ| ≤ α },  α ≥ 0,  with estimate ũ = $100,

one concludes that the worker's robustness is $40 and their opportuneness is $50: if they are certain that they will make $100, they will neither sleep in the street nor feast, and likewise if they make within $40 of $100. However, if they erred in their estimate by more than $40, they may find themselves on the street, while if they erred by more than $50, they may find themselves dining in opulence.
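Under the absolute-error model, both immunity levels reduce to a subtraction. The following is an illustrative sketch of the example above (function names are mine, not standard info-gap tooling):

```python
# Budget example under the absolute-error model:
# U(alpha) = [estimate - alpha, estimate + alpha].

def robustness(estimate: float, critical: float) -> float:
    """Largest horizon alpha such that even the worst case, estimate - alpha,
    still meets the critical level (zero if the estimate itself fails)."""
    return max(estimate - critical, 0.0)

def opportuneness(estimate: float, windfall: float) -> float:
    """Smallest horizon alpha such that the best case, estimate + alpha,
    can reach the windfall level (zero if the estimate already reaches it)."""
    return max(windfall - estimate, 0.0)

print(robustness(100, 60))      # -> 40.0: up to $40 of error is survivable
print(opportuneness(100, 150))  # -> 50.0: at least $50 of error is needed to feast
```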
As stated, this example is only descriptive and does not enable any decision-making – in applications, one considers alternative decision rules, and often situations with more complex uncertainty.
Consider now the worker thinking of moving to a different town, where the work pays less but lodgings are cheaper. Say that there they estimate they will earn $80 per week, but lodging costs only $44, while entertainment still costs $150. In that case the robustness will be $36, while the opportuneness will be $70. If they make the same dollar errors in both cases, the second case (moving) is both less robust and less opportune.
On the other hand, if one measures uncertainty by relative error, using the fractional error model

 𝒰(α, ũ) = { u : |u − ũ| ≤ α·|ũ| },  α ≥ 0,

then in the first case robustness is 40% and opportuneness is 50%, while in the second case robustness is 45% and opportuneness is 87.5%, so moving is more robust but less opportune.
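The comparison between the two towns can likewise be sketched with the fractional-error model; the numbers are those of the example, the helper names are mine:

```python
# Fractional-error model U(alpha) = [estimate*(1 - alpha), estimate*(1 + alpha)]:
# horizons are measured as fractions of the estimate rather than in dollars.

def frac_robustness(estimate: float, critical: float) -> float:
    # Worst case estimate*(1 - alpha) >= critical  =>  alpha <= 1 - critical/estimate.
    return max(1.0 - critical / estimate, 0.0)

def frac_opportuneness(estimate: float, windfall: float) -> float:
    # Best case estimate*(1 + alpha) >= windfall  =>  alpha >= windfall/estimate - 1.
    return max(windfall / estimate - 1.0, 0.0)

# First town: estimate $100, critical $60, windfall $150.
print(frac_robustness(100, 60), frac_opportuneness(100, 150))  # 40% and 50%
# Second town: estimate $80, critical $44, windfall $150.
print(frac_robustness(80, 44), frac_opportuneness(80, 150))    # 45% and 87.5%
```

Moving is more robust (45% vs. 40%) but less opportune (87.5% vs. 50%), matching the text.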
This example demonstrates the sensitivity of analysis to the model of uncertainty.
Info-gap models
Info-gap can be applied to spaces of functions; in that case the uncertain parameter is a function u with estimate ũ, and the nested subsets are sets of functions. One way to describe such a set of functions is by requiring values of u to be close to values of ũ for all x, using a family of info-gap models on the values.
For example, the above fractional error model for values becomes the fractional error model for functions by adding a parameter x to the definition:

 𝒰(α, ũ) = { u(x) : |u(x) − ũ(x)| ≤ α·|ũ(x)| for all x },  α ≥ 0

More generally, if 𝒰_x(α, ũ(x)) is a family of info-gap models on values, then one obtains an info-gap model on functions in the same way:

 𝒰(α, ũ) = { u(x) : u(x) ∈ 𝒰_x(α, ũ(x)) for all x },  α ≥ 0
Motivation
It is common to make decisions under uncertainty.^{[note 1]} What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set is permitted while still "guaranteeing" acceptable performance? In everyday terms, the "robustness" of a decision is the size of the deviation from an estimate that still leads to performance within requirements when using that decision. It is sometimes difficult to judge how much robustness is needed or sufficient. However, according to info-gap theory, the ranking of feasible decisions in terms of their degree of robustness is independent of such judgments.
Info-gap theory also proposes an opportuneness function, which evaluates the potential for windfall outcomes resulting from favorable uncertainty.
Example: resource allocation
Here is an illustrative example that introduces the basic concepts of info-gap theory. A more rigorous description and discussion follow.
Resource allocation
Suppose you are a project manager, supervising two teams: a red team and a blue team. Each team will yield some revenue at the end of the year. This revenue depends on the investment in the team – higher investments yield higher revenues. You have a limited amount of resources, and you wish to decide how to allocate them between the two teams so that the total revenue of the project will be as high as possible.
If you have an estimate of the relationship between the investment in each team and its revenue, as illustrated in Figure 1, you can also estimate the total revenue as a function of the allocation. This is exemplified in Figure 2 – the left-hand side of the graph corresponds to allocating all resources to the red team, while the right-hand side corresponds to allocating all resources to the blue team. A simple optimization will reveal the optimal allocation – the allocation that, under your estimate of the revenue functions, will yield the highest revenue.
Introducing uncertainty
However, this analysis does not take uncertainty into account. Since the revenue functions are only a (possibly rough) estimate, the actual revenue functions may be quite different. For any level of uncertainty (or horizon of uncertainty) we can define an envelope within which we assume the actual revenue functions lie. Higher uncertainty corresponds to a more inclusive envelope. Two of these uncertainty envelopes, surrounding the revenue function of the red team, are represented in Figure 3. As illustrated in Figure 4, the actual revenue function may be any function within a given uncertainty envelope. Of course, some instances of the revenue functions are possible only when the uncertainty is high, while small deviations from the estimate are possible even when the uncertainty is small.
These envelopes are called info-gap models of uncertainty, since they describe one's understanding of the uncertainty surrounding the revenue functions.
From the info-gap models (or uncertainty envelopes) of the revenue functions, we can determine an info-gap model for the total revenue. Figure 5 illustrates two of the uncertainty envelopes defined by the info-gap model of the total revenue.
Robustness
High revenues would typically earn a project manager the senior management's respect, but if the total revenues fall below a certain threshold, it will cost the project manager their job. We will call such a threshold the critical revenue, since total revenues beneath the critical revenue will be considered a failure.
For any given allocation, the robustness of the allocation, with respect to the critical revenue, is the maximal uncertainty that still guarantees that the total revenue will exceed the critical revenue. This is demonstrated in Figure 6. As the uncertainty increases, the envelope of uncertainty becomes more inclusive, eventually including instances of the total revenue function that, for the specific allocation, yield a revenue smaller than the critical revenue.
The robustness measures the immunity of a decision to failure. A robust satisficer is a decision maker who prefers choices with higher robustness.
If, for some allocation q, the relationship between the critical revenue and the robustness is plotted, the result is a graph somewhat similar to that in Figure 7. This graph, called the robustness curve of allocation q, has two important features that are common to (most) robustness curves:
 The curve is non-increasing. This captures the notion that when higher requirements (a higher critical revenue) are in place, failure to meet the target is more likely (lower robustness). This is the trade-off between quality and robustness.
 At the nominal revenue, that is, when the critical revenue equals the revenue under the nominal model (the estimate of the revenue functions), the robustness is zero. This is because even a slight deviation from the estimate may decrease the total revenue below the critical level.
If the robustness curves of two allocations, q₁ and q₂, are compared, the two curves may intersect, as illustrated in Figure 8. In this case, neither allocation is strictly more robust than the other: for critical revenues smaller than the crossing point, one allocation is more robust, while for critical revenues higher than the crossing point the other is. That is, the preference between the two allocations depends on the criterion of failure – the critical revenue.
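The crossing behavior can be reproduced in a toy model. Everything here – the linear revenue estimates and the choice to put all the uncertainty on the red team – is an assumption for illustration, not taken from the source:

```python
# Toy model: a unit budget is split, fraction x to the red team (estimated
# revenue 3x, fractionally uncertain) and 1 - x to the blue team (revenue
# 1 - x, treated as certain).
# Worst-case total at horizon alpha: 3x(1 - alpha) + (1 - x).

def robustness(x: float, r_crit: float) -> float:
    """Largest alpha with 3x(1 - alpha) + (1 - x) >= r_crit, for x > 0
    (zero if even the estimated total misses the requirement)."""
    return max((1.0 + 2.0 * x - r_crit) / (3.0 * x), 0.0)

# The robustness curves of x = 0.5 and x = 1.0 cross (cf. Figure 8):
print(robustness(0.5, 0.5), robustness(1.0, 0.5))  # modest target: x = 0.5 wins
print(robustness(0.5, 2.0), robustness(1.0, 2.0))  # demanding target: x = 1.0 wins
```

With this model, the half-and-half allocation is more robust for modest critical revenues, while the all-red allocation is more robust for demanding ones, so the preferred allocation depends on the critical revenue.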
Opportuneness
Suppose that, in addition to the threat of losing your job, the senior management offers you a carrot: if the revenues exceed some threshold, you will be awarded a considerable bonus. Although revenues below this threshold will not be considered a failure (you may still keep your job), higher revenues will be considered a windfall success. We will therefore call this threshold the windfall revenue.
For any given allocation, the opportuneness of the allocation, with respect to the windfall revenue, is the minimal uncertainty for which it is possible for the total revenue to exceed the windfall revenue. This is demonstrated in Figure 9. As the uncertainty decreases, the envelope of uncertainty becomes less inclusive, eventually excluding all instances of the total revenue function that, for the specific allocation, yield a revenue higher than the windfall revenue.
The opportuneness may be considered as the immunity to windfall success. Therefore, lower opportuneness is preferred to higher opportuneness.
If, for some allocation q, the relationship between the windfall revenue and the opportuneness is plotted, the result is a graph somewhat similar to Figure 10. This graph, called the opportuneness curve of allocation q, has two important features that are common to (most) opportuneness curves:
 The curve is non-decreasing. This captures the notion that when we set higher targets (a higher windfall revenue), a more substantial deviation from the estimate is needed to achieve the ambitious goal (higher opportuneness, which is less desirable). This is the trade-off between quality and opportuneness.
 At the nominal revenue, that is, when the windfall revenue equals the revenue under the nominal model (our estimate of the revenue functions), the opportuneness is zero. This is because no deviation from the estimate is needed in order to achieve the windfall revenue.
Treatment of severe uncertainty
The logic underlying the above illustration is that the (unknown) true revenue is somewhere in the immediate neighborhood of the (known) estimate of the revenue. For if this is not the case, what is the point of conducting the analysis exclusively in this neighborhood?
Therefore, to remind ourselves that info-gap's manifest objective is to seek robust solutions for problems that are subject to severe uncertainty, it is instructive to include in the display of the results also those associated with the true value of the revenue. Of course, given the severity of the uncertainty, we do not know the true value.
What we do know, however, is that according to our working assumptions the estimate we have is a poor indication of the true value of the revenue and is likely to be substantially wrong. So, methodologically speaking, we have to display the true value at a distance from its estimate. In fact, it would be even more enlightening to display a number of possible true values of the revenue.
In short, methodologically speaking, the picture is this:
Note that, in addition to the results generated by the estimate, two "possible" true values of the revenue are also displayed at a distance from the estimate.
As indicated by the picture, since the info-gap robustness model applies its maximin analysis in an immediate neighborhood of the estimate, there is no assurance that the analysis is in fact conducted in the neighborhood of the true value of the revenue. In fact, under conditions of severe uncertainty this – methodologically speaking – is very unlikely.
This raises the question: how valid/useful/meaningful are the results? Aren't we sweeping the severity of the uncertainty under the carpet?
For example, suppose that a given allocation is found to be very fragile in the neighborhood of the estimate. Does this mean that this allocation is also fragile elsewhere in the region of uncertainty? Conversely, what guarantee is there that an allocation that is robust in the neighborhood of the estimate is also robust elsewhere in the region of uncertainty, indeed in the neighborhood of the true value of the revenue?
More fundamentally, given that the results generated by info-gap are based on a local revenue/allocation analysis in the neighborhood of an estimate that is likely to be substantially wrong, we have no choice – methodologically speaking – but to assume that the results generated by this analysis are equally likely to be substantially wrong. In other words, in accordance with the universal Garbage In, Garbage Out axiom, we have to assume that the quality of the results generated by info-gap's analysis is only as good as the quality of the estimate on which they are based.
What emerges, then, is that info-gap theory has yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article address this severity issue and its methodological and practical implications.
A more detailed analysis of an illustrative numerical investment problem of this type can be found in Sniedovich (2007).
Uncertainty models
Info-gaps are quantified by info-gap models of uncertainty. An info-gap model is an unbounded family of nested sets. A frequently encountered example is a family of nested ellipsoids all having the same shape. The structure of the sets in an info-gap model derives from the information about the uncertainty. In general terms, the structure of an info-gap model of uncertainty is chosen to define the smallest or strictest family of sets whose elements are consistent with the prior information. Since there is usually no known worst case, the family of sets may be unbounded.
A common example of an info-gap model is the fractional error model. The best estimate of an uncertain function is ũ(x), but the fractional error of this estimate is unknown. The following unbounded family of nested sets of functions is a fractional-error info-gap model:

 𝒰(α, ũ) = { u(x) : |u(x) − ũ(x)| ≤ α·|ũ(x)| for all x },  α ≥ 0

At any horizon of uncertainty α, the set 𝒰(α, ũ) contains all functions u(x) whose fractional deviation from ũ(x) is no greater than α. However, the horizon of uncertainty is unknown, so the info-gap model is an unbounded family of sets, and there is no worst case or greatest deviation.
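For scalar values, the fractional-error model reduces to a simple membership test, on which the basic axioms can be checked directly. This is an illustrative sketch; the helper names are mine:

```python
# Membership test for a scalar fractional-error info-gap model:
# U(alpha) = { u : |u - u_est| <= alpha * |u_est| }.

def in_U(u: float, alpha: float, u_est: float) -> bool:
    """True iff u lies within the uncertainty set at horizon alpha."""
    return abs(u - u_est) <= alpha * abs(u_est)

# Nesting: a point inside the set at some horizon stays inside at any larger one.
assert in_U(110.0, 0.1, 100.0) and in_U(110.0, 0.2, 100.0)
# Contraction: at horizon zero, only the estimate itself is in the set.
assert in_U(100.0, 0.0, 100.0) and not in_U(100.5, 0.0, 100.0)
print("axioms hold on these samples")
```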
There are many other types of info-gap models of uncertainty. All info-gap models obey two basic axioms:
 Nesting. The info-gap model is nested if α < α′ implies that: 𝒰(α, ũ) ⊆ 𝒰(α′, ũ)
 Contraction. The info-gap model 𝒰(0, ũ) is a singleton set containing its center point: 𝒰(0, ũ) = { ũ }
The nesting axiom imposes the property of "clustering" that is characteristic of info-gap uncertainty. Furthermore, the nesting axiom implies that the uncertainty sets become more inclusive as α grows, thus endowing α with its meaning as a horizon of uncertainty. The contraction axiom implies that, at horizon of uncertainty zero, the estimate ũ is correct.
Recall that the uncertain element u may be a parameter, vector, function or set. The info-gap model is then an unbounded family of nested sets of parameters, vectors, functions or sets.
Sublevel sets
For a fixed point estimate ũ, an info-gap model is often equivalent to a function α(·) defined as:

 α(u) = min { α : u ∈ 𝒰(α, ũ) }

meaning "the uncertainty of a point u is the minimum horizon of uncertainty such that u is in the set at that horizon". In this case, the family of sets can be recovered as the sublevel sets of α(·):

 𝒰(α, ũ) = { u : α(u) ≤ α }

meaning: "the nested subset with horizon of uncertainty α consists of all points u with uncertainty less than or equal to α".
Conversely, given a function α(·) satisfying the axiom α(u) = 0 if and only if u = ũ, it defines an info-gap model via its sublevel sets.
For instance, if the region of uncertainty is a metric space, then the uncertainty function can simply be the distance from the estimate, α(u) = d(u, ũ), so the nested subsets are simply

 𝒰(α, ũ) = { u : d(u, ũ) ≤ α }

This always defines an info-gap model: distances are always non-negative, the contraction axiom holds because the distance between two points is zero if and only if they are equal (the identity of indiscernibles), and nesting follows by the construction of sublevel sets.
Not all info-gap models arise as sublevel sets. For instance, if u ∈ 𝒰(α, ũ) for all α > 1 but u ∉ 𝒰(1, ũ) (its uncertainty is "just more" than 1), then the minimum above is not defined; one can replace it by an infimum, but then the resulting sublevel sets will not agree with the info-gap model: the infimum gives α(u) = 1, so u lies in the sublevel set at horizon 1 but not in 𝒰(1, ũ). The effect of this distinction is very minor, however, as it modifies sets by less than changing the horizon of uncertainty by any positive number, however small.
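A minimal sketch of the sublevel-set construction in a metric space, using plain absolute distance on the reals (function names are mine):

```python
def uncertainty(u: float, u_est: float) -> float:
    """Uncertainty function: the distance of a point from the estimate."""
    return abs(u - u_est)

def in_sublevel_set(u: float, alpha: float, u_est: float) -> bool:
    """Sublevel-set info-gap model: U(alpha) = { u : uncertainty(u) <= alpha }."""
    return uncertainty(u, u_est) <= alpha

# Contraction: zero uncertainty exactly at the estimate (identity of indiscernibles).
assert uncertainty(100.0, 100.0) == 0.0 and uncertainty(103.0, 100.0) > 0.0
# Nesting by construction: raising alpha only ever adds points to the set.
assert not in_sublevel_set(103.0, 2.0, 100.0) and in_sublevel_set(103.0, 5.0, 100.0)
```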
Robustness and opportuneness
Uncertainty may be either pernicious or propitious. That is, uncertain variations may be either adverse or favorable. Adversity entails the possibility of failure, while favorability is the opportunity for sweeping success. Info-gap decision theory is based on quantifying these two aspects of uncertainty, and choosing an action that addresses one or the other or both of them simultaneously. The pernicious and propitious aspects of uncertainty are quantified by two "immunity functions": the robustness function expresses the immunity to failure, while the opportuneness function expresses the immunity to windfall gain.
Robustness and opportuneness functions
The robustness function expresses the greatest level of uncertainty at which failure cannot occur; the opportuneness function is the least level of uncertainty which entails the possibility of sweeping success. The robustness and opportuneness functions address, respectively, the pernicious and propitious facets of uncertainty.
Let q be a decision vector of parameters such as design variables, time of initiation, model parameters or operational options. We can verbally express the robustness and opportuneness functions as the maximum or minimum of a set of values of the horizon of uncertainty α of an info-gap model:

 α̂(q) = max { α : minimal requirements are always satisfied }   (robustness) (1a)
 β̂(q) = min { α : sweeping success is possible }   (opportuneness) (2a)

Formally,

 α̂(q) = max { α : minimal requirements are satisfied for all u ∈ 𝒰(α, ũ) }   (robustness) (1b)
 β̂(q) = min { α : sweeping success is possible for some u ∈ 𝒰(α, ũ) }   (opportuneness) (2b)
We can "read" eq. (1) as follows. The robustness α̂(q) of decision vector q is the greatest value of the horizon of uncertainty α for which specified minimal requirements are always satisfied. α̂(q) expresses robustness – the degree of resistance to uncertainty and immunity against failure – so a large value of α̂(q) is desirable. Robustness is defined as a worst-case scenario up to the horizon of uncertainty: how large can the horizon of uncertainty be such that, even in the worst case, the critical level of outcome is still achieved?
Eq. (2) states that the opportuneness β̂(q) is the least level of uncertainty which must be tolerated in order to enable the possibility of sweeping success as a result of decision q. β̂(q) is the immunity against windfall reward, so a small value of β̂(q) is desirable. A small value of β̂(q) reflects the opportune situation that great reward is possible even in the presence of little ambient uncertainty. Opportuneness is defined as a best-case scenario up to the horizon of uncertainty: how small can the horizon of uncertainty be such that, in the best case, the windfall reward is achievable?
The immunity functions α̂(q) and β̂(q) are complementary and are defined in an anti-symmetric sense. Thus "bigger is better" for α̂(q), while "big is bad" for β̂(q). The immunity functions – robustness and opportuneness – are the basic decision functions in info-gap decision theory.
Optimization
The robustness function involves a maximization, but not of the performance or outcome of the decision: in general the outcome could be arbitrarily bad. Rather, it maximizes the level of uncertainty that would be required for the outcome to fail.
The greatest tolerable uncertainty is found, at which decision q satisfices the performance at a critical survival level. One may establish one's preferences among the available actions according to their robustnesses α̂(q), whereby larger robustness engenders higher preference. In this way the robustness function underlies a satisficing decision algorithm which maximizes the immunity to pernicious uncertainty.
The opportuneness function in eq. (2) involves a minimization, however not, as might be expected, of the damage which can accrue from unknown adverse events. The least horizon of uncertainty is sought at which decision q enables (but does not necessarily guarantee) large windfall gain. Unlike the robustness function, the opportuneness function does not satisfice; it "windfalls". Windfalling preferences are those which prefer actions for which the opportuneness function takes a small value. When β̂(q) is used to choose an action q, one is "windfalling" by optimizing the opportuneness from propitious uncertainty in an attempt to enable highly ambitious goals or rewards.
Given a scalar reward function R(q, u), depending on the decision vector q and the info-gap-uncertain function u, the minimal requirement in eq. (1) is that the reward be no less than a critical value r_c. Likewise, the sweeping success in eq. (2) is attainment of a "wildest dream" level of reward r_w which is much greater than r_c. Usually neither of these threshold values, r_c and r_w, is chosen irrevocably before performing the decision analysis. Rather, these parameters enable the decision maker to explore a range of options. In any case the windfall reward r_w is greater, usually much greater, than the critical reward r_c:

 r_w > r_c
The robustness and opportuneness functions of eqs. (1) and (2) can now be expressed more explicitly:

 α̂(q, r_c) = max { α : min_{u ∈ 𝒰(α, ũ)} R(q, u) ≥ r_c }   (3)
 β̂(q, r_w) = min { α : max_{u ∈ 𝒰(α, ũ)} R(q, u) ≥ r_w }   (4)

α̂(q, r_c) is the greatest level of uncertainty consistent with guaranteed reward no less than the critical reward r_c, while β̂(q, r_w) is the least level of uncertainty which must be accepted in order to facilitate (but not guarantee) windfall as great as r_w. The complementary or anti-symmetric structure of the immunity functions is evident from eqs. (3) and (4).
These definitions can be modified to handle multi-criterion reward functions. Likewise, analogous definitions apply when R(q, u) is a loss rather than a reward.
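Eqs. (3) and (4) can be evaluated numerically by scanning horizons of uncertainty on a grid. The reward function, the fractional-error model and all numbers below are illustrative assumptions; the endpoint scan for the inner min/max is valid only because this toy reward is monotone in u:

```python
def worst_and_best(q, u_est, alpha, reward):
    """Inner min and max of R(q, u) over the fractional-error interval
    U(alpha) = [u_est*(1 - alpha), u_est*(1 + alpha)] (endpoint scan,
    valid here because the reward is monotone in u)."""
    lo, hi = reward(q, u_est * (1 - alpha)), reward(q, u_est * (1 + alpha))
    return min(lo, hi), max(lo, hi)

def robustness(q, u_est, r_crit, reward, alphas):
    """Eq. (3): greatest horizon on the grid whose worst case still meets r_crit."""
    ok = [a for a in alphas if worst_and_best(q, u_est, a, reward)[0] >= r_crit]
    return max(ok, default=0.0)

def opportuneness(q, u_est, r_wind, reward, alphas):
    """Eq. (4): least horizon on the grid whose best case can reach r_wind."""
    ok = [a for a in alphas if worst_and_best(q, u_est, a, reward)[1] >= r_wind]
    return min(ok, default=float("inf"))

reward = lambda q, u: q * u             # toy reward: decision q scales the uncertain u
alphas = [i / 100 for i in range(101)]  # horizons 0.00 .. 1.00
print(robustness(2.0, 100.0, 120.0, reward, alphas))    # -> 0.4
print(opportuneness(2.0, 100.0, 260.0, reward, alphas)) # -> 0.3
```

The grid search is deliberately crude but mirrors the structure of the definitions: an outer max/min over the horizon, and an inner min/max over the uncertainty set at that horizon.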
Decision rules
Based on these functions, one can then decide on a course of action by optimizing for uncertainty: choose the decision which is most robust (can withstand the greatest uncertainty; "satisficing"), or choose the decision which requires the least uncertainty to achieve a windfall.
Formally, optimizing for robustness or for opportuneness yields a preference relation on the set of decisions, and the decision rule is "optimize with respect to this preference".
In the following, let 𝒬 be the set of all available or feasible decision vectors q.
Robust-satisficing
The robustness function generates robust-satisficing preferences on the options: decisions are ranked in increasing order of robustness for a given critical reward, i.e., by α̂(q, r_c) value, meaning q is preferred to q′ if

 α̂(q, r_c) > α̂(q′, r_c)

A robust-satisficing decision is one which maximizes the robustness and satisfices the performance at the critical level r_c.
Denote the maximum robustness by α̂_c (formally α̂_c(r_c), the maximum robustness for a given critical reward), and the corresponding decision (or decisions) by q̂_c (formally q̂_c(r_c), the optimizing action for a given level of critical reward):

 α̂_c(r_c) = max_{q ∈ 𝒬} α̂(q, r_c)
 q̂_c(r_c) = arg max_{q ∈ 𝒬} α̂(q, r_c)

Usually, though not invariably, the robust-satisficing action q̂_c depends on the critical reward r_c.
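For a finite decision set, the robust-satisficing rule is an arg-max of the robustness. In this sketch the closed-form robustness comes from an assumed toy model R(q, u) = q·u with a fractional-error model around the estimate; all names and numbers are mine:

```python
def robustness(q: float, r_crit: float, u_est: float = 100.0) -> float:
    """Robustness of decision q for R(q, u) = q*u under a fractional-error model:
    worst case q*u_est*(1 - alpha) >= r_crit  =>  alpha <= 1 - r_crit/(q*u_est)."""
    return max(1.0 - r_crit / (q * u_est), 0.0)

def robust_satisfice(decisions, r_crit):
    """The robust-satisficing action: maximize robustness at the given r_crit."""
    return max(decisions, key=lambda q: robustness(q, r_crit))

print(robust_satisfice([1.0, 2.0, 3.0], 150.0))  # -> 3.0
```

In this toy model robustness is monotone in q, so the ranking does not depend on r_c; with richer models the robustness curves can cross (as in Figure 8), and the preferred decision then changes with the critical reward.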
Opportune-windfalling
Conversely, one may optimize opportuneness: the opportuneness function generates opportune-windfalling preferences on the options: decisions are ranked in decreasing order of opportuneness for a given windfall reward, i.e., by β̂(q, r_w) value, meaning q is preferred to q′ if

 β̂(q, r_w) < β̂(q′, r_w)

The opportune-windfalling decision, q̂_w(r_w), minimizes the opportuneness function on the set of available decisions.
Denote the minimum opportuneness by β̂_w (formally β̂_w(r_w), the minimum opportuneness for a given windfall reward), and the corresponding decision (or decisions) by q̂_w (formally q̂_w(r_w), the windfall-optimizing action for a given level of windfall reward):

 β̂_w(r_w) = min_{q ∈ 𝒬} β̂(q, r_w)
 q̂_w(r_w) = arg min_{q ∈ 𝒬} β̂(q, r_w)

The two preference rankings, as well as the corresponding optimal decisions q̂_c(r_c) and q̂_w(r_w), may differ, and may vary depending on the values of r_c and r_w.
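The mirror-image opportune-windfalling rule, for the same assumed toy model (R(q, u) = q·u with fractional-error uncertainty around the estimate; names and numbers are mine):

```python
def opportuneness(q: float, r_wind: float, u_est: float = 100.0) -> float:
    """Opportuneness of decision q for R(q, u) = q*u under a fractional-error
    model: best case q*u_est*(1 + alpha) >= r_wind  =>  alpha >= r_wind/(q*u_est) - 1."""
    return max(r_wind / (q * u_est) - 1.0, 0.0)

def opportune_windfall(decisions, r_wind):
    """The opportune-windfalling action: minimize opportuneness at r_wind."""
    return min(decisions, key=lambda q: opportuneness(q, r_wind))

print(opportune_windfall([1.0, 2.0, 3.0], 450.0))  # -> 3.0
```

Note the anti-symmetry with the robust-satisficing rule: there, larger robustness is better and the rule is an arg-max; here, smaller opportuneness is better and the rule is an arg-min.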
Applications
Info-gap theory has generated a substantial literature. It has been studied or applied in a range of fields, including engineering,^{[5]}^{[6]}^{[7]}^{[8]}^{[9]}^{[10]}^{[11]}^{[12]}^{[13]}^{[14]}^{[15]}^{[16]}^{[17]}^{[18]} biological conservation,^{[19]}^{[20]}^{[21]}^{[22]}^{[23]}^{[24]}^{[25]}^{[26]}^{[27]}^{[28]}^{[29]}^{[30]} theoretical biology,^{[31]} homeland security,^{[32]} economics,^{[33]}^{[34]}^{[35]} project management^{[36]}^{[37]}^{[38]} and statistics.^{[39]} Foundational issues related to info-gap theory have also been studied.^{[40]}^{[41]}^{[42]}^{[43]}^{[44]}^{[45]}
The remainder of this section describes in a little more detail the kinds of uncertainty addressed by info-gap theory. Although many published works are mentioned below, no attempt is made here to present insights from these papers. The emphasis is not upon elucidation of the concepts of info-gap theory, but upon the contexts where it is used and the goals.
Engineering
A typical engineering application is the vibration analysis of a cracked beam, where the location, size, shape and orientation of the crack are unknown and greatly influence the vibration dynamics.^{[9]} Very little is usually known about these spatial and geometrical uncertainties. The info-gap analysis allows one to model these uncertainties, and to determine the degree of robustness – to these uncertainties – of properties such as vibration amplitude, natural frequencies, and natural modes of vibration. Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes.^{[8]}^{[10]} The response of the structure depends strongly on the spatial and temporal distribution of the loads. However, storms and earthquakes are highly idiosyncratic events, and the interaction between the event and the structure involves very site-specific mechanical properties which are rarely known. The info-gap analysis enables the design of the structure to enhance structural immunity against uncertain deviations from design-basis or estimated worst-case loads.^{[citation needed]} Another engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events.^{[11]}^{[13]}
Biology
Biological systems are vastly more complex and subtle than our best models, so the conservation biologist faces substantial infogaps in using biological models. For instance, Levy et al.^{[19]} use an infogap robustsatisficing "methodology for identifying management alternatives that are robust to environmental uncertainty, but nonetheless meet specified socioeconomic and environmental goals." They use infogap robustness curves to select among management options for sprucebudworm populations in eastern Canada. Burgman^{[46]} uses the fact that the robustness curves of different alternatives can intersect to illustrate a change in preference between conservation strategies for the orangebellied parrot.
Project management
Project management is another area where infogap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and infogap robustness can assist in project planning and integration.^{[37]} Financial economics is another area where the future is fraught with surprises, which may be either pernicious or propitious. Infogap robustness and opportuneness analyses can assist in portfolio design, credit rationing, and other applications.^{[33]}
Limitations
In applying infogap theory, one must remain aware of certain limitations.
Firstly, infogap makes assumptions, namely on the universe in question and on the degree of uncertainty – the infogap model is a model of degrees of uncertainty or similarity of various assumptions within a given universe. Infogap does not make probability assumptions within this universe – it is nonprobabilistic – but it does quantify a notion of "distance from the estimate". In brief, infogap makes fewer assumptions than a probabilistic method, but it does make some assumptions.
Further, unforeseen events (those not in the universe) are not incorporated: infogap addresses modeled uncertainty, not unexpected uncertainty, as in black swan theory, particularly the ludic fallacy. This is not a problem when the possible events by definition fall in a given universe, but in real world applications significant events may be "outside model". For instance, a simple model of daily stock market returns – which by definition fall in the range [−100%, +∞) – may include extreme moves such as Black Monday (1987) but might not model the market breakdowns following the September 11 attacks: it considers the "known unknowns", not the "unknown unknowns". This is a general criticism of much decision theory, and is by no means specific to infogap, but infogap is not immune to it.
Secondly, there is no natural scale: is an uncertainty of α = 1 small or large? Different models of uncertainty give different scales, and require judgment and understanding of the domain and of the model of uncertainty. Similarly, measuring differences between outcomes requires judgment and understanding of the domain.
Thirdly, if the universe under consideration is larger than a significant horizon of uncertainty, and outcomes for these distant points are significantly different from outcomes near the estimate, then the conclusions of robustness or opportuneness analyses will generally be: "one must be very confident of one's assumptions, else outcomes may be expected to vary significantly from projections" – a cautionary conclusion.
Disclaimer and summary
The robustness and opportuneness functions can inform decisions. For example, a change in decision that increases robustness may increase or decrease opportuneness. From a subjective stance, robustness and opportuneness both trade off against aspiration for outcome: robustness and opportuneness deteriorate as the decision maker's aspirations increase. Robustness is zero for modelbest anticipated outcomes. Robustness curves for alternative decisions may cross as a function of aspiration, implying reversal of preference.
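The tradeoff against aspiration, and the possibility of preference reversal, can be illustrated with a small numeric sketch. The two worst-case reward curves and all numbers below are hypothetical, chosen only so that the robustness curves cross:

```python
# Hypothetical worst-case reward of two decisions as a function of the
# horizon of uncertainty alpha (illustrative models, not from the article).
def worst_reward_A(alpha):
    return 10.0 - 4.0 * alpha   # high nominal reward, degrades steeply

def worst_reward_B(alpha):
    return 7.0 - 1.0 * alpha    # lower nominal reward, degrades gently

def robustness(worst_reward, r_c, alphas):
    """Largest horizon of uncertainty whose worst-case reward still meets
    the aspiration r_c; zero if even alpha = 0 fails."""
    feasible = [a for a in alphas if worst_reward(a) >= r_c]
    return max(feasible, default=0.0)

alphas = [i * 0.01 for i in range(1001)]   # horizons 0 .. 10

# Demanding aspiration: decision A is the more robust choice.
print(robustness(worst_reward_A, 9.0, alphas))   # 0.25
print(robustness(worst_reward_B, 9.0, alphas))   # 0.0

# Modest aspiration: the curves have crossed; B is now preferred.
print(robustness(worst_reward_A, 4.0, alphas))   # 1.5
print(robustness(worst_reward_B, 4.0, alphas))   # 3.0
```

Note also that at the modelbest aspiration (here r_c = 10 for decision A) the robustness is zero, as stated above.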
Various theorems identify conditions where larger infogap robustness implies larger probability of success, regardless of the underlying probability distribution. However, these conditions are technical, and do not translate into any commonsense, verbal recommendations, limiting such applications of infogap theory by nonexperts.
Criticism
A general criticism of nonprobabilistic decision rules, discussed in detail at decision theory: alternatives to probability theory, is that optimal decision rules (formally, admissible decision rules) can always be derived by probabilistic methods, with a suitable utility function and prior distribution (this is the statement of the complete class theorems), and thus that nonprobabilistic methods such as infogap are unnecessary and do not yield new or better decision rules.
A more general criticism of decision making under uncertainty is the impact of outsized, unexpected events, ones that are not captured by the model. This is discussed particularly in black swan theory, and infogap, used in isolation, is vulnerable to this, as are a fortiori all decision theories that use a fixed universe of possibilities, notably probabilistic ones.
In criticism specific to infogap, Sniedovich^{[47]} raises two objections to infogap decision theory, one substantive, one scholarly:
 1. the infogap uncertainty model is flawed and oversold
 Infogap models uncertainty via a nested family of subsets around a point estimate, and is touted as applicable under situations of "severe uncertainty". Sniedovich argues that under severe uncertainty, one should not start from a point estimate, which is likely to be seriously flawed. Instead, one should consider the universe of possibilities, not its subsets. Stated alternatively, under severe uncertainty, one should use global decision theory (consider the entire region of uncertainty), not local decision theory (starting with a point estimate and considering deviations from it). Sniedovich argues that infogap decision theory is therefore a "voodoo decision theory."
 2. infogap is maximin
 BenHaim (2006, p.xii) claims that infogap is "radically different from all current theories of decision under uncertainty," while Sniedovich argues that infogap's robustness analysis is precisely maximin analysis of the horizon of uncertainty. By contrast, BenHaim states (BenHaim 1999, pp. 271–2) that "robust reliability is emphatically not a [minmax] worstcase analysis". Note that BenHaim compares infogap to minimax, while Sniedovich considers it a case of maximin.
Sniedovich has challenged the validity of infogap theory for making decisions under severe uncertainty. He questions the effectiveness of infogap theory in situations where the best estimate is a poor indication of the true value of the parameter u. Sniedovich notes that the infogap robustness function is "local" to the region around the estimate ũ, where ũ is likely to be substantially in error. He concludes that therefore the infogap robustness function is an unreliable assessment of immunity to error.
Maximin
Sniedovich argues that infogap's robustness model is maximin analysis of, not the outcome, but the horizon of uncertainty: one chooses a decision so as to maximize the horizon of uncertainty α such that the minimal (critical) outcome is achieved, assuming the worstcase outcome for a given horizon. Symbolically, max α assuming min (worstcase) outcome, or maximin.
In other words, while it is not a maximin analysis of outcome over the universe of uncertainty, it is a maximin analysis over a properly construed decision space.
BenHaim argues that infogap's robustness model is not minmax/maximin analysis because it is not worstcase analysis of outcomes; it is a satisficing model, not an optimization model – a (straightforward) maximin analysis would consider worstcase outcomes over the entire space which, since uncertainty is often potentially unbounded, would yield an unboundedly bad worst case.
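The disputed structure can be made concrete in a small sketch. The reward model R(q, u) = q·u, the interval uncertainty sets, and all numbers are assumed here for illustration only: robustness is computed as an outer maximization over the horizon α of an inner worst-case minimum over the uncertainty set, which is the maximin structure Sniedovich points to, applied to the horizon rather than directly to the outcome:

```python
# Assumed toy model: reward R(q, u) = q * u, with interval uncertainty
# sets U(alpha, u_est) = [u_est - alpha, u_est + alpha] around estimate u_est.
def worst_case_reward(q, u_est, alpha):
    # Inner minimization: worst reward over the uncertainty set U(alpha, u_est).
    return min(q * (u_est - alpha), q * (u_est + alpha))

def infogap_robustness(q, u_est, r_c, alpha_grid):
    # Outer maximization: largest horizon whose worst case still meets r_c.
    feasible = [a for a in alpha_grid
                if worst_case_reward(q, u_est, a) >= r_c]
    return max(feasible, default=0.0)

grid = [i * 0.001 for i in range(5001)]   # horizons 0 .. 5
print(infogap_robustness(3.0, 2.0, 3.0, grid))   # 1.0
print(infogap_robustness(2.0, 2.0, 3.0, grid))   # 0.5
```

Note that no worst case over the whole (possibly unbounded) universe is ever taken – the minimization is always over the bounded set U(α, ũ), which is BenHaim's point; Sniedovich's point is that the max-of-min structure is nonetheless maximin.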
Stability radius
Sniedovich^{[3]} has shown that infogap's robustness model is a simple stability radius model, namely a local stability model of the generic form

  ρ̂(ũ) := max { ρ ≥ 0 : p ∈ P(s) for all p ∈ B(ρ, ũ) }

where B(ρ, ũ) denotes a ball of radius ρ centered at the nominal value ũ and P(s) denotes the set of values of p that satisfy predetermined stability conditions.
In other words, infogap's robustness model is a stability radius model characterized by a stability requirement of the form r_c ≤ R(q, u). Since stability radius models are designed for the analysis of small perturbations in a given nominal value of a parameter, Sniedovich^{[3]} argues that infogap's robustness model is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space.
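A numeric sketch of this stability-radius reading follows; the stability condition (stable for p in [−3, 3]) and the nominal value are assumed for illustration. The radius is the largest ρ such that every point of the ball B(ρ, ũ) satisfies the condition:

```python
def is_stable(p):
    # Assumed stability condition: the system is stable for p in [-3, 3].
    return p * p <= 9.0

def stability_radius(p_nominal, rho_grid, samples=200):
    # Largest rho such that every sampled point of the ball (here an
    # interval) [p_nominal - rho, p_nominal + rho] is stable.
    largest = 0.0
    for rho in rho_grid:
        pts = [p_nominal - rho + 2.0 * rho * k / samples
               for k in range(samples + 1)]
        if all(is_stable(p) for p in pts):
            largest = rho
    return largest

rho_grid = [i * 0.01 for i in range(1001)]   # radii 0 .. 10
print(stability_radius(1.0, rho_grid))   # 2.0: [1 - rho, 1 + rho] fits in [-3, 3] up to rho = 2
```

As the sketch shows, the radius depends only on the distance from the nominal value to the nearest instability, which is exactly the local character Sniedovich criticizes.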
Discussion
Satisficing and bounded rationality
It is correct that the infogap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments. That is, regardless of the formal analysis, a framework for discussion is provided. Without entering into any particular framework, or characteristics of frameworks in general, discussion follows about proposals for such frameworks.
Simon^{[48]} introduced the idea of bounded rationality. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. Simon advocated satisficing rather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. Schwartz,^{[49]} Conlisk^{[50]} and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The infogap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Infogap theory ... can function sensibly when there are 'severe' knowledge gaps." The infogap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options."^{[51]} Burgman then proceeds to develop an infogap robustsatisficing strategy for protecting the endangered orangebellied parrot. Similarly, Vinot, Cogan and Cipolla^{[52]} discuss engineering design and note that "the downside of a modelbased analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if modelbased analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable suboptimal level of performance while remaining maximally robust to the system uncertainties."^{[52]} They proceed to develop an infogap robustsatisficing design procedure for an aerospace application.
Alternatives
Of course, decision in the face of uncertainty is nothing new, and attempts to deal with it have a long history. A number of authors have noted and discussed similarities and differences between infogap robustness and minimax or worstcase methods.^{[7]}^{[16]}^{[35]}^{[37]}^{[53]}^{[54]} Sniedovich^{[47]} has demonstrated formally that the infogap robustness function can be represented as a maximin optimization, and is thus related to Wald's maximin theory. Sniedovich^{[47]} has claimed that infogap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong.
On the other hand, the estimate is the best one has, so it is useful to know if it can err greatly and still yield an acceptable outcome. This critical question clearly raises the issue of whether robustness (as defined by infogap theory) is qualified to judge whether confidence is warranted,^{[5]}^{[55]} ^{[56]} and how it compares to methods used to inform decisions under uncertainty using considerations not limited to the neighborhood of a bad initial guess. Answers to these questions vary with the particular problem at hand. Some general comments follow.
Sensitivity analysis
Sensitivity analysis – how sensitive conclusions are to input assumptions – can be performed independently of a model of uncertainty: most simply, one may take two different assumed values for an input and compare the conclusions. From this perspective, infogap can be seen as a technique of sensitivity analysis, though by no means the only one.
Robust optimization
The robust optimization literature ^{[57]}^{[58]}^{[59]}^{[60]}^{[61]}^{[62]} provides methods and techniques that take a global approach to robustness analysis. These methods directly address decision under severe uncertainty, and have been used for this purpose for more than thirty years now. Wald's Maximin model is the main instrument used by these methods.
The principal difference between the Maximin model employed by infogap and the various Maximin models employed by robust optimization methods is in the manner in which the total region of uncertainty is incorporated in the robustness model. Infogap takes a local approach that concentrates on the immediate neighborhood of the estimate. In sharp contrast, robust optimization methods set out to incorporate in the analysis the entire region of uncertainty, or at least an adequate representation thereof. In fact, some of these methods do not even use an estimate.
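The local/global difference can be sketched numerically; the two reward models and all numbers below are hypothetical. A global (robust-optimization style) maximin over the whole region prefers the decision with the best worst case anywhere, while infogap's local robustness around the estimate can prefer a different decision:

```python
# Assumed setting: full region of uncertainty U = [0, 10], estimate u_est = 5.
def reward_A(u):
    return 10.0 - 3.0 * abs(u - 5.0)   # excellent near the estimate, poor far away

def reward_B(u):
    return 4.0                          # mediocre everywhere

universe = [i * 0.01 for i in range(1001)]   # grid over U = [0, 10]

# Global worst case over all of U: maximin picks B (4.0 beats -5.0).
print(min(reward_A(u) for u in universe))    # -5.0
print(min(reward_B(u) for u in universe))    #  4.0

# Local (infogap style) robustness for aspiration r_c = 6: largest alpha
# with the worst case over [u_est - alpha, u_est + alpha] still >= r_c.
def local_robustness(reward, u_est, r_c, alphas):
    feasible = [a for a in alphas
                if min(reward(u_est - a), reward(u_est + a)) >= r_c]
    return max(feasible, default=0.0)

alphas = [i * 0.01 for i in range(501)]      # horizons 0 .. 5
print(local_robustness(reward_A, 5.0, 6.0, alphas))   # ~1.33 -> infogap picks A
print(local_robustness(reward_B, 5.0, 6.0, alphas))   # 0.0
```

The two analyses disagree precisely because the local one never looks at the far ends of the region, which is the point of contention discussed above.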
Comparative analysis
Classical decision theory^{[63]}^{[64]} offers two approaches to decisionmaking under severe uncertainty, namely maximin and Laplace's principle of insufficient reason (assume all outcomes equally likely); these may be considered alternative solutions to the problem infogap addresses.
Further, as discussed at decision theory: alternatives to probability theory, probabilists, particularly Bayesian probabilists, argue that optimal decision rules (formally, admissible decision rules) can always be derived by probabilistic methods (this is the statement of the complete class theorems), and thus that nonprobabilistic methods such as infogap are unnecessary and do not yield new or better decision rules.
Maximin
As attested by the rich literature on robust optimization, maximin provides a wide range of methods for decision making in the face of severe uncertainty.
Indeed, as discussed in criticism of infogap decision theory, infogap's robustness model can be interpreted as an instance of the general maximin model.
Bayesian analysis
As for Laplace's principle of insufficient reason, in this context it is convenient to view it as an instance of Bayesian analysis.
The essence of the Bayesian analysis is applying probabilities for different possible realizations of the uncertain parameters. In the case of Knightian (nonprobabilistic) uncertainty, these probabilities represent the decision maker's "degree of belief" in a specific realization.
In our example, suppose there are only five possible realizations of the uncertain revenue to allocation function. The decision maker believes that the estimated function is the most likely, and that the likelihood decreases as the difference from the estimate increases. Figure 11 exemplifies such a probability distribution.
Now, for any allocation, one can construct a probability distribution of the revenue based on these prior beliefs. The decision maker can then choose the allocation with the highest expected revenue, or with the lowest probability of an unacceptable revenue, etc.
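A minimal sketch of this procedure follows; the five revenue functions and the prior weights are assumed numbers, standing in for the distribution of Figure 11:

```python
# Degrees of belief over five possible revenue-vs-allocation realizations;
# realization 2 is the point estimate and is believed most likely.
priors = [0.1, 0.2, 0.4, 0.2, 0.1]

def revenue(realization, allocation):
    # Hypothetical linear revenue models for each realization.
    slopes = [0.5, 0.8, 1.0, 1.2, 1.5]
    return slopes[realization] * allocation

def expected_revenue(allocation):
    # Bayesian expected revenue under the prior beliefs.
    return sum(p * revenue(k, allocation) for k, p in enumerate(priors))

def prob_unacceptable(allocation, threshold):
    # Probability mass of realizations whose revenue falls below threshold.
    return sum(p for k, p in enumerate(priors)
               if revenue(k, allocation) < threshold)

print(expected_revenue(100.0))         # 100.0
print(prob_unacceptable(100.0, 90.0))  # ~0.3 (realizations 0 and 1)
```

Either summary (expected revenue, or probability of an unacceptable one) can then be optimized over allocations, as described above.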
The most problematic step of this analysis is the choice of the realization probabilities. When there is extensive and relevant past experience, an expert may use this experience to construct a probability distribution. But even with extensive past experience, when some parameters change, the expert may only be able to estimate that one realization is more likely than another, and will not be able to reliably quantify this difference. Furthermore, when conditions change drastically, or when there is no past experience at all, it may prove difficult even to estimate whether one realization is more likely than another.
Nevertheless, methodologically speaking, this difficulty is not as problematic as basing the analysis of a problem subject to severe uncertainty on a single point estimate and its immediate neighborhood, as done by infogap. And what is more, contrary to infogap, this approach is global, rather than local.
Still, it must be stressed that Bayesian analysis does not expressly concern itself with the question of robustness.
It should also be noted that Bayesian analysis raises the issue of learning from experience and adjusting probabilities accordingly. In other words, decision is not a onestop process, but profits from a sequence of decisions and observations.
Classical decision theory perspective
In the framework of classical decision theory, infogap's robustness model can be construed as an instance of Wald's Maximin model and its opportuneness model is an instance of the classical Minimin model. Both operate in the neighborhood of an estimate of the parameter of interest whose true value is subject to severe uncertainty and therefore is likely to be substantially wrong. Moreover, the considerations brought to bear upon the decision process itself also originate in the locality of this unreliable estimate, and so may or may not be reflective of the entire range of decisions and uncertainties.
Background, working assumptions, and a look ahead
Decision under severe uncertainty is a formidable task, and the development of methodologies capable of handling it is an even more arduous undertaking. Indeed, over the past sixty years an enormous effort has gone into the development of such methodologies. Yet, for all the knowledge and expertise that have accrued in this area of decision theory, no fully satisfactory general methodology is available to date.
Now, as portrayed in the infogap literature, infogap was designed expressly as a methodology for solving decision problems that are subject to severe uncertainty. And what is more, its aim is to seek solutions that are robust.
Thus, to have a clear picture of infogap's modus operandi and its role and place in decision theory and robust optimization, it is imperative to examine it within this context. In other words, it is necessary to establish infogap's relation to classical decision theory and robust optimization. To this end, the following questions must be addressed:
 What are the characteristics of decision problems that are subject to severe uncertainty?
 What difficulties arise in the modelling and solution of such problems?
 What type of robustness is sought?
 How does infogap theory address these issues?
 In what way is infogap decision theory similar to and/or different from other theories for decision under uncertainty?
Two important points need to be elucidated in this regard at the outset:
 Considering the severity of the uncertainty that infogap was designed to tackle, it is essential to clarify the difficulties posed by severe uncertainty.
 Since infogap is a nonprobabilistic method that seeks to maximize robustness to uncertainty, it is imperative to compare it to the single most important "nonprobabilistic" model in classical decision theory, namely Wald's Maximin paradigm (Wald 1945, 1950). After all, this paradigm has dominated the scene in classical decision theory for well over sixty years now.
So, first let us clarify the assumptions that are implied by severe uncertainty.
Working assumptions
Infogap decision theory employs three simple constructs to capture the uncertainty associated with decision problems:
 A parameter u whose true value is subject to severe uncertainty.
 A region of uncertainty 𝔘 where the true value of u lies.
 An estimate ũ of the true value of u.
It should be pointed out, though, that as such these constructs are generic, meaning that they can be employed to model situations where the uncertainty is not severe but mild, indeed very mild. So it is vital to be clear that, to give apt expression to the severity of the uncertainty, in the infogap framework these three constructs are given specific meaning.
 1. The region of uncertainty 𝔘 is relatively large. In fact, BenHaim (2006, p. 210) indicates that in the context of infogap decision theory most of the commonly encountered regions of uncertainty are unbounded.
 2. The estimate ũ is a poor approximation of the true value of u. That is, the estimate is a poor indication of the true value of u (BenHaim, 2006, p. 280) and is likely to be substantially wrong (BenHaim, 2006, p. 281).
In the picture, u* denotes the true (unknown) value of u.
The point to note here is that conditions of severe uncertainty entail that the estimate can – relatively speaking – be very distant from the true value of u. This is particularly pertinent for methodologies, like infogap, that seek robustness to uncertainty. Indeed, assuming otherwise would – methodologically speaking – be tantamount to engaging in wishful thinking.
In short, the situations that infogap is designed to take on are demanding in the extreme. Hence, the challenge that one faces conceptually, methodologically and technically is considerable. It is essential therefore to examine whether infogap robustness analysis succeeds in this task, and whether the tools that it deploys differ from those made available by Wald's (1945) Maximin paradigm, especially as used in robust optimization.
So let us take a quick look at this stalwart of classical decision theory and robust optimization.
Wald's Maximin paradigm
The basic idea behind this famous paradigm can be expressed in plain language as follows:
Maximin Rule: The maximin rule tells us to rank alternatives by their worst possible outcomes: we are to adopt the alternative the worst outcome of which is superior to the worst outcome of the others.
Rawls^{[65]} (1971, p. 152)
Thus, according to this paradigm, in the framework of decisionmaking under severe uncertainty, the robustness of an alternative is a measure of how well this alternative can cope with the worst uncertain outcome that it can generate. Needless to say, this attitude towards severe uncertainty often leads to the selection of highly conservative alternatives. This is precisely the reason that this paradigm is not always a satisfactory methodology for decisionmaking under severe uncertainty (Tintner 1952).
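The rule, and the conservatism it induces, can be shown in a tiny sketch with invented payoffs: the "aggressive" alternative is better in most states of the world, yet maximin selects the "conservative" one because only worst outcomes count:

```python
# Hypothetical payoff table: rows are alternatives, columns are states of the world.
payoffs = {
    "conservative": [4, 4, 4],
    "aggressive":   [9, 6, 1],
}

def maximin_choice(table):
    # Rank alternatives by their worst possible outcome and pick the best.
    return max(table, key=lambda alt: min(table[alt]))

print(maximin_choice(payoffs))   # conservative: worst case 4 beats worst case 1
```

Here the aggressive alternative outperforms in two of the three states, which illustrates why the maximin attitude is often criticized as overly conservative (Tintner 1952).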
As indicated in the overview, infogap's robustness model is a Maximin model in disguise. More specifically, it is a simple instance of Wald's Maximin model where:
 The region of uncertainty associated with an alternative decision is an immediate neighborhood of the estimate ũ.
 The uncertain outcomes of an alternative are determined by a characteristic function of the performance requirement under consideration.
Thus, aside from the conservatism issue, a far more serious issue must be addressed. This is the validity issue arising from the local nature of infogap's robustness analysis.
Local vs global robustness
The validity of the results generated by infogap's robustness analysis is crucially contingent on the quality of the estimate ũ. Alas, according to infogap's own working assumptions, this estimate is poor and likely to be substantially wrong (BenHaim, 2006, pp. 280–281).
The trouble with this feature of infogap's robustness model is brought out more forcefully by the picture. The white circle represents the immediate neighborhood of the estimate ũ on which the Maximin analysis is conducted. Since the region of uncertainty is large and the quality of the estimate is poor, it is very likely that the true value of u is distant from the point at which the Maximin analysis is conducted.
So given the severity of the uncertainty under consideration, how valid/useful can this type of Maximin analysis really be?
The critical issue here is then to what extent a local robustness analysis à la Maximin in the immediate neighborhood of a poor estimate can aptly represent a large region of uncertainty. This is a serious issue that must be dealt with in this article.
It should be pointed out that, in comparison, robust optimization methods invariably take a far more global view of robustness. So much so that scenario planning and scenario generation are central issues in this area. This reflects a strong commitment to an adequate representation of the entire region of uncertainty in the definition of robustness and in the robustness analysis itself.
And finally, there is another reason why the intimate relation to Maximin is crucial to this discussion. This has to do with the portrayal of infogap's contribution to the state of the art in decision theory, and its role and place vis-à-vis other methodologies.
Role and place in decision theory
Infogap is emphatic about its advancement of the state of the art in decision theory:
Infogap decision theory is radically different from all current theories of decision under uncertainty. The difference originates in the modelling of uncertainty as an information gap rather than as a probability.
BenHaim (2006, p. xii)
In this book we concentrate on the fairly new concept of informationgap uncertainty, whose differences from more classical approaches to uncertainty are real and deep. Despite the power of classical decision theories, in many areas such as engineering, economics, management, medicine and public policy, a need has arisen for a different format for decisions based on severely uncertain evidence.
BenHaim (2006, p. 11)
These strong claims must be substantiated. In particular, a clearcut, unequivocal answer must be given to the following question: in what way is infogap's generic robustness model different, indeed radically different, from worstcase analysis a la Maximin?
Subsequent sections of this article describe various aspects of infogap decision theory and its applications, how it proposes to cope with the working assumptions outlined above, the local nature of infogap's robustness analysis and its intimate relationship with Wald's classical Maximin paradigm and worstcase analysis.
Invariance property
The main point to keep in mind here is that infogap's raison d'être is to provide a methodology for decision under severe uncertainty. This means that its primary test is the efficacy of its handling of, and coping with, severe uncertainty. To this end, it must first be established how infogap's robustness/opportuneness models behave as the severity of the uncertainty is increased or decreased.
Second, it must be established whether infogap's robustness/opportuneness models give adequate expression to the potential variability of the performance function over the entire region of uncertainty. This is particularly important because infogap is usually concerned with relatively large, indeed unbounded, regions of uncertainty.
So, let 𝔘 denote the total region of uncertainty and consider these key questions:
 How does the robustness/opportuneness analysis respond to an increase/decrease in the size of 𝔘?
 How does an increase/decrease in the size of 𝔘 affect the robustness or opportuneness of a decision?
 How representative are the results generated by infogap's robustness/opportuneness analysis of what occurs in the relatively large total region of uncertainty 𝔘?
Suppose then that the robustness α̂(q, r_c) has been computed for a decision q and it is observed that U(α̂(q, r_c) + ε, ũ) ⊆ 𝔘 for some ε > 0.
The question is then: how would the robustness of q, namely α̂(q, r_c), be affected if the total region of uncertainty were, say, twice as large as 𝔘, or perhaps even 10 times as large as 𝔘?
Consider then the following result, which is a direct consequence of the local nature of infogap's robustness/opportuneness analysis and the nesting property of infogap's regions of uncertainty (Sniedovich 2007):
Invariance Theorem
The robustness of decision q is invariant with the size of the total region of uncertainty 𝔘 for all 𝔘 such that

  U(α̂(q, r_c) + ε, ũ) ⊆ 𝔘 for some ε > 0.   (7)

In other words, for any given decision, infogap's analysis yields the same results for all total regions of uncertainty 𝔘 that contain U(α̂(q, r_c) + ε, ũ). This applies to both the robustness and opportuneness models.
This is illustrated in the picture: the robustness of a given decision does not change notwithstanding an increase in the total region of uncertainty from 𝔘 to 𝔘′.
In short, by dint of focusing exclusively on the immediate neighborhood of the estimate ũ, infogap's robustness/opportuneness models are inherently local. For this reason they are, in principle, incapable of incorporating into the analysis regions of uncertainty that lie outside the neighborhoods U(α̂(q, r_c), ũ) and U(β̂(q, r_w), ũ) of the estimate ũ, respectively.
To illustrate, consider a simple numerical example where the total region of uncertainty is 𝔘 = (−∞, ∞), the estimate is ũ = 0, and for some decision q we obtain α̂(q, r_c) = 1. The picture is this:
where the term "No man's land" refers to the part of the total region of uncertainty 𝔘 that is outside the region U(α̂(q, r_c), ũ).
Note that in this case the robustness of decision q is based on its (worstcase) performance over no more than a minuscule part of the total region of uncertainty, namely an immediate neighborhood of the estimate ũ. Since infogap's total region of uncertainty is usually unbounded, this illustration represents the usual case rather than an exception.
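The invariance can be checked numerically. The reward model and numbers below are assumed, and the region is truncated to a finite half-width so that it can be enlarged explicitly:

```python
# Assumed ingredients: estimate u_est = 0, reward R(u) = 10 - u**2, and
# aspiration r_c = 9, so the worst case over [-alpha, alpha] meets r_c
# exactly up to alpha = 1.
def robustness(u_est, r_c, region_half_width):
    # Search horizons alpha from 0 up to the edge of the total region.
    alphas = [i * 0.001 for i in range(int(region_half_width * 1000) + 1)]
    best = 0.0
    for a in alphas:
        # Worst case of R over [u_est - a, u_est + a]; for this concave R
        # the minimum is attained at one of the two endpoints.
        worst = min(10.0 - (u_est - a) ** 2, 10.0 - (u_est + a) ** 2)
        if worst >= r_c:
            best = a
    return best

# Enlarging the total region of uncertainty tenfold, twice over, leaves
# the robustness unchanged, as the Invariance Theorem asserts:
for half_width in (2.0, 20.0, 200.0):
    print(robustness(0.0, 9.0, half_width))   # 1.0 each time
```

Everything beyond the ball of radius 1 around the estimate is "no man's land": it never enters the computation, however large the region is made.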
The thing to note then is that infogap's robustness and opportuneness are by definition local properties. As such they cannot assess the performance of decisions over the total region of uncertainty. For this reason it is not clear how infogap's robustness/opportuneness models can provide a meaningful/sound/useful basis for decision under severe uncertainty, where the estimate is poor and likely to be substantially wrong.
This crucial issue is addressed in subsequent sections of this article.
Maximin/Minimin: playing robustness/opportuneness games with Nature
For well over sixty years now, Wald's Maximin model has figured in classical decision theory and related areas – such as robust optimization – as the foremost nonprobabilistic paradigm for the modeling and treatment of severe uncertainty.
Infogap is propounded (e.g. BenHaim 2001, 2006) as a new nonprobabilistic theory that is radically different from all current theories for decision under uncertainty. So, it is imperative to examine in this discussion in what way, if any, infogap's robustness model is radically different from Maximin. For one thing, there is a wellestablished assessment of the utility of Maximin. For example, Berger (Chapter 5)^{[66]} suggests that even in situations where no prior information is available (a best case for Maximin), Maximin can lead to bad decision rules and be hard to implement. He recommends Bayesian methodology. And as indicated above:
It should also be remarked that the minimax principle even if it is applicable leads to an extremely conservative policy.
Tintner (1952, p. 25)^{[67]}
However, quite apart from the ramifications that establishing this point might have for the utility of infogap's robustness model, the reason that it behooves us to clarify the relationship between infogap and Maximin is the centrality of the latter in decision theory. After all, this is a major classical decision methodology. So, any theory claiming to furnish a new nonprobabilistic methodology for decision under severe uncertainty would be expected to be compared to this stalwart of decision theory. And yet, not only is a comparison of infogap's robustness model to Maximin absent from the three books expounding infogap (BenHaim 1996, 2001, 2006), Maximin is not even mentioned in them as the major decision theoretic methodology for severe uncertainty that it is.
Elsewhere in the infogap literature, one can find discussions dealing with similarities and differences between these two paradigms, as well as discussions on the relationship between infogap and worstcase analysis.^{[7]}^{[16]}^{[35]}^{[37]}^{[53]}^{[68]} However, the general impression is that the intimate connection between these two paradigms has not been identified. Indeed, the opposite is argued. For instance, BenHaim (2005^{[35]}) argues that infogap's robustness model is similar to Maximin but is not a Maximin model.
The following quote eloquently expresses BenHaim's assessment of infogap's relationship to Maximin and it provides ample motivation for the analysis that follows.
We note that robust reliability is emphatically not a worstcase analysis. In classical worstcase minmax analysis the designer minimizes the impact of the maximally damaging case. But an infogap model of uncertainty is an unbounded family of nested sets: U(α, ũ), for all α ≥ 0. Consequently, there is no worst case: any adverse occurrence is less damaging than some other more extreme event occurring at a larger value of α. What Eq. (1) expresses is the greatest level of uncertainty consistent with nofailure. When the designer chooses q to maximize α̂(q, r_c) he is maximizing his immunity to an unbounded ambient uncertainty. The closest this comes to "minmaxing" is that the design is chosen so that "bad" events (causing reward less than r_c) occur as "far away" as possible (beyond a maximized value of α).
BenHaim, 1999, pp. 271–272^{[69]}
The point to note here is that this statement misses the fact that the horizon of uncertainty α is bounded above (implicitly) by the performance requirement R(q,u) ≥ r_c,
and that infogap conducts its worstcase analysis—one analysis at a time for a given horizon of uncertainty α—within each of the regions of uncertainty U(α, ũ).
In short, given the discussions in the infogap literature on this issue, it is obvious that the kinship between infogap's robustness model and Wald's Maximin model, as well as infogap's kinship with other models of classical decision theory must be brought to light. So, the objective in this section is to place infogap's robustness and opportuneness models in their proper context, namely within the wider frameworks of classical decision theory and robust optimization.
The discussion is based on the classical decision theoretic perspective outlined by Sniedovich (2007^{[70]}) and on standard texts in this area (e.g. Resnik 1987,^{[63]} French 1988^{[64]}).
This is unavoidable because infogap's models are mathematical.
Generic models
The basic conceptual framework that classical decision theory provides for dealing with uncertainty is that of a twoplayer game. The two players are the decision maker (DM) and Nature, where Nature represents uncertainty. More specifically, Nature represents the DM's attitude towards uncertainty and risk.
Note that a clear distinction is made in this regard between a pessimistic decision maker and an optimistic decision maker, namely between a worstcase attitude and a bestcase attitude. A pessimistic decision maker assumes that Nature plays against him whereas an optimistic decision maker assumes that Nature plays with him.
To express these intuitive notions mathematically, classical decision theory uses a simple model consisting of the following three constructs:
 A set D representing the decision space available to the DM.
 A set of sets {S(d): d ∈ D} representing the state spaces associated with the decisions in D.
 A function f stipulating the outcomes f(d,s) generated by the decision–state pairs (d,s).
The function f is called objective function, payoff function, return function, cost function etc.
The decisionmaking process (game) defined by these objects consists of three steps:
 Step 1: The DM selects a decision d ∈ D.
 Step 2: In response, given d, Nature selects a state s ∈ S(d).
 Step 3: The outcome f(d,s) is allotted to DM.
Note that in contrast to games considered in classical game theory, here the first player (DM) moves first so that the second player (Nature) knows what decision was selected by the first player prior to selecting her decision. Thus, the conceptual and technical complications regarding the existence of a Nash equilibrium point are not pertinent here. Nature is not an independent player; it is a conceptual device describing the DM's attitude towards uncertainty and risk.
At first sight, the simplicity of this framework may strike one as naive. Yet, as attested by the variety of specific instances that it encompasses, it is rich in possibilities, flexible, and versatile. For the purposes of this discussion it suffices to consider the following classical generic setup:
where opt_DM and opt_N represent the DM's and Nature's optimality criteria, respectively; that is, each is equal to either min or max.
If opt_DM = opt_N then the game is cooperative, and if opt_DM ≠ opt_N then the game is noncooperative. Thus, this format represents four cases: two noncooperative games (Maximin and Minimax) and two cooperative games (Minimin and Maximax). The respective formulations are as follows:
Each case is specified by a pair of optimality criteria employed by DM and Nature. For example, Maximin depicts a situation where DM strives to maximize the outcome and Nature strives to minimize it. Similarly, the Minimin paradigm represents situations where both DM and Nature are striving to minimize the outcome.
Of particular interest to this discussion are the Maximin and Minimin paradigms because they subsume infogap's robustness and opportuneness models, respectively. So, here they are:
Maximin Game:
 Step 1: The DM selects a decision d ∈ D with a view to maximize the outcome f(d,s).
 Step 2: In response, given d, Nature selects a state s in S(d) that minimizes f(d,s) over S(d).
 Step 3: The outcome f(d,s) is allotted to DM.
Minimin Game:
 Step 1: The DM selects a decision d ∈ D with a view to minimize the outcome f(d,s).
 Step 2: In response, given d, Nature selects a state s in S(d) that minimizes f(d,s) over S(d).
 Step 3: The outcome f(d,s) is allotted to DM.
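The four generic games can be rendered as a minimal discrete sketch. The decisions, states and outcome values below are purely illustrative; the point is the move order (DM first, Nature second) and Nature's stance (antagonistic for Maximin, sympathetic for Minimin).

```python
f = {                       # outcome f(d, s) for each decision-state pair (made up)
    "d1": {"s1": 3, "s2": 7},
    "d2": {"s1": 5, "s2": 4},
}

def nature(d, antagonistic=True):
    # Nature moves second, knowing d: worst state if antagonistic,
    # best state if sympathetic.
    vals = f[d].values()
    return min(vals) if antagonistic else max(vals)

# Maximin (pessimistic DM, antagonistic Nature): maximize the worst-case outcome.
maximin_choice = max(f, key=lambda d: nature(d, antagonistic=True))

# Minimin (both players minimize the outcome): the DM banks on the lowest value.
minimin_choice = min(f, key=lambda d: nature(d, antagonistic=True))

print(maximin_choice, minimin_choice)
```

With these numbers, Maximin picks "d2" (its worst outcome 4 beats "d1"'s worst outcome 3), while Minimin picks "d1" (whose smallest outcome is the smallest overall).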
With this in mind, consider now infogap's robustness and opportuneness models.
Infogap's robustness model
From a classical decision theoretic point of view infogap's robustness model is a game between the DM and Nature, where the DM selects the value of α (aiming for the largest possible) whereas Nature selects the worst value of u in U(α, ũ). In this context the worst value of u pertaining to a given (q, α) pair is a u that violates the performance requirement R(q,u) ≥ r_c. This is achieved by minimizing R(q,u) over U(α, ũ).
There are various ways to incorporate the DM's objective and Nature's antagonistic response in a single outcome. For instance, one can use for this purpose the characteristic function φ of the performance requirement: φ(q,u) = 1 if R(q,u) ≥ r_c, and φ(q,u) = 0 otherwise.
Note that, as desired, for any triplet (q, α, u) of interest the outcome α·φ(q,u) is equal to α if the performance requirement is satisfied at u, and to 0 otherwise;
hence from the DM's point of view satisficing the performance constraint is equivalent to maximizing α·φ(q,u).
In short,
Infogap's Maximin Robustness Game for decision :
 Step 1: The DM selects a horizon of uncertainty α ≥ 0 with a view to maximize the outcome.
 Step 2: In response, given α, Nature selects a u in U(α, ũ) that minimizes the outcome over U(α, ũ).
 Step 3: The outcome is allotted to DM.
Clearly, the DM's optimal alternative is to select the largest value of α such that the worst u in U(α, ũ) satisfies the performance requirement.
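The robustness game just described can be sketched numerically for a fixed decision. The performance function, estimate and critical level below are hypothetical stand-ins, and U(α) is taken as a simple interval around the estimate:

```python
import numpy as np

u_est, r_c = 0.0, 1.0      # hypothetical estimate and performance requirement

def R(q, u):
    # Hypothetical performance function.
    return q - u ** 2

def worst_case(q, alpha, grid=401):
    # Step 2: antagonistic Nature minimizes R over U(alpha) = [u_est - alpha, u_est + alpha].
    u = np.linspace(u_est - alpha, u_est + alpha, grid)
    return R(q, u).min()

def robustness(q, alphas=np.linspace(0.0, 5.0, 501)):
    # Step 1: the DM picks the largest alpha whose worst case still meets r_c.
    feasible = [float(a) for a in alphas if worst_case(q, a) >= r_c]
    return max(feasible) if feasible else 0.0

print(robustness(q=5.0))
```

For q = 5 the worst case over [−α, α] is 5 − α², which meets the requirement r_c = 1 exactly up to α = 2, so the computed robustness is about 2.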
Maximin Theorem
As shown in Sniedovich (2007),^{[47]} Infogap's robustness model is a simple instance of Wald's maximin model, in which the DM maximizes over the horizon of uncertainty α and Nature minimizes over u in U(α, ũ). Specifically,
α̂(q, r_c) = max {α ≥ 0 : R(q,u) ≥ r_c for all u in U(α, ũ)}
Infogap's opportuneness model
By the same token, infogap's opportuneness model is a simple instance of the generic Minimin model, in which the DM minimizes over the horizon of uncertainty α and a sympathetic Nature minimizes with him. That is,
β̂(q, r_w) = min {α ≥ 0 : R(q,u) ≥ r_w for some u in U(α, ũ)}
where the outcome is defined by means of the characteristic function of the windfall requirement R(q,u) ≥ r_w,
observing that, as desired, for any triplet (q, α, u) of interest satisficing the windfall requirement is equivalent to minimizing this outcome;
hence, for a given pair (q, α), the DM would satisfy the performance requirement via minimizing the outcome over U(α, ũ). Nature's behavior is a reflection of her sympathetic stance here.
Remark: This attitude towards risk and uncertainty, which assumes that Nature will play with us, is rather naive. As noted by Resnik (1987, p. 32^{[63]}), "... But that rule surely would have few adherents ...". Nevertheless, it is often used in combination with the Maximin rule in the formulation of Hurwicz's optimism-pessimism rule (Resnik 1987,^{[63]} French 1988^{[64]}) with a view to mitigate the extreme conservatism of Maximin.
Mathematical programming formulations
To bring out more forcefully that infogap's robustness model is an instance of the generic Maximin model, and infogap's opportuneness model an instance of the generic Minimin model, it is instructive to examine the equivalent so-called Mathematical Programming (MP) formats of these generic models (Ecker and Kupferschmid,^{[71]} 1988, pp. 24–25; Thie 1988,^{[72]} pp. 314–317; Kouvelis and Yu,^{[59]} 1997, p. 27):
Thus, in the case of infogap we have
To verify the equivalence between infogap's formats and the respective decision theoretic formats, recall that, by construction, for any triplet (q, α, u) of interest the outcome registers whether the performance requirement is satisfied at u.
This means that in the case of robustness/Maximin, an antagonistic Nature will (effectively) minimize the outcome by seeking a u that violates the performance requirement, whereas in the case of opportuneness/Minimin a sympathetic Nature will (effectively) assist the DM by seeking a u that satisfies the windfall requirement.
Summary
Infogap's robustness analysis stipulates that given a pair (q, α), the worst element of U(α, ũ) is realized. This of course is a typical Maximin analysis. In the parlance of classical decision theory:
The Robustness of decision q is the largest horizon of uncertainty, α, such that the worst value of u in U(α, ũ) satisfies the performance requirement R(q,u) ≥ r_c.
Similarly, infogap's opportuneness analysis stipulates that given a pair (q, α), the best element of U(α, ũ) is realized. This of course is a typical Minimin analysis. In the parlance of classical decision theory:
The Opportuneness of decision q is the smallest horizon of uncertainty, α, such that the best value of u in U(α, ũ) satisfies the windfall requirement R(q,u) ≥ r_w.
The mathematical transliterations of these concepts are straightforward, resulting in typical Maximin/Minimin models, respectively.
Far from being restrictive, the generic Maximin/Minimin models' lean structure is a blessing in disguise. The main point here is that the abstract character of the three basic constructs of the generic models
 Decision
 State
 Outcome
in effect allows for great flexibility in modeling.
A more detailed analysis is therefore required to bring out the full force of the relationship between infogap and generic classical decision theoretic models. See #Notes on the art of math modeling.
Treasure hunt
The following is a pictorial summary of Sniedovich's (2007) discussion on local vs global robustness. For illustrative purposes it is cast here as a Treasure Hunt. It shows how the elements of infogap's robustness model relate to one another and how the severe uncertainty is treated in the model.
 (1) You are in charge of a treasure hunt on a large island somewhere in the Asia/Pacific region. You consult a portfolio of search strategies. You need to decide which strategy would be best for this particular expedition.
 (2) The difficulty is that the treasure's exact location on the island is unknown. There is a severe gap between what you need to know—the true location of the treasure—and what you actually know—a poor estimate of the true location.
 (3) Somehow you compute an estimate of the true location of the treasure. Since we are dealing here with severe uncertainty, we assume—methodologically speaking—that this estimate is a poor indication of the true location and is likely to be substantially wrong.
 (4) To determine the robustness of a given strategy, you conduct a local worstcase analysis in the immediate neighborhood of the poor estimate. Specifically, you compute the largest safe deviation from the poor estimate that does not violate the performance requirement.
 (5) You compute the robustness of each search strategy in your portfolio and you select the one whose robustness is the largest.
 (6) To remind yourself and the financial backers of the expedition that this analysis is subject to severe uncertainty in the true location of the treasure, it is important—methodologically speaking—to display the true location on the map. Of course, you do not know the true location. But given the severity of the uncertainty, you place it at some distance from the poor estimate. The more severe the uncertainty, the greater should the distance (gap) between the true location and the estimate be.
 Epilogue: According to Sniedovich (2007) this is an important reminder of the central issue in decisionmaking under severe uncertainty. The estimate we have is a poor indication of the true value of the parameter of interest and is likely to be substantially wrong. Therefore, in the case of infogap it is important to show the gap on the map by displaying the true value of the parameter somewhere in the region of uncertainty. The small red mark on the map represents the true (unknown) location of the treasure.
In summary:
Infogap's robustness model is a mathematical representation of a local worstcase analysis in the neighborhood of a given estimate of the true value of the parameter of interest. Under severe uncertainty the estimate is assumed to be a poor indication of the true value of the parameter and is likely to be substantially wrong.
The fundamental question therefore is: Given the
 Severity of the uncertainty
 Local nature of the analysis
 Poor quality of the estimate
how meaningful and useful are the results generated by the analysis, and how sound is the methodology as a whole?
More on this criticism can be found on Sniedovich's web site.
Notes on the art of math modeling
Constraint satisficing vs payoff optimization
Any satisficing problem can be formulated as an optimization problem. To see that this is so, let the objective function of the optimization problem be the indicator function of the constraints pertaining to the satisficing problem. Thus, if our concern is to identify a worstcase scenario pertaining to a constraint, this can be done via a suitable Maximin/Minimax worstcase analysis of the indicator function of the constraint.
This means that the generic decision theoretic models can handle outcomes that are induced by constraint satisficing requirements rather than by say payoff maximization.
In particular, note the equivalence
where
and therefore
In practical terms, this means that an antagonistic Nature will aim to select a state that will violate the constraint whereas a sympathetic Nature will aim to select a state that will satisfy the constraint. As for the outcome, the penalty for violating the constraint is such that the decision maker will refrain from selecting a decision that will allow Nature to violate the constraint within the state space pertaining to the selected decision.
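The indicator-function device can be sketched in a few lines. The decisions, state spaces and the constraint below are illustrative inventions; the point is that an antagonistic Nature minimizing the indicator is exactly a hunt for a constraint violation:

```python
states = {"d1": [1, 2, 3], "d2": [2, 4]}   # hypothetical state spaces S(d)

def constraint(d, s):
    # Hypothetical satisficing requirement on the realized state.
    return s <= 3

def indicator(ok):
    # 0 when the constraint holds, a prohibitive penalty when it does not.
    return 0.0 if ok else float("-inf")

def worst_case_value(d):
    # Antagonistic Nature minimizes the indicator, i.e. looks for a violation.
    return min(indicator(constraint(d, s)) for s in states[d])

# A decision is acceptable iff its worst-case indicator value is 0.
print({d: worst_case_value(d) for d in states})
```

Here "d1" survives the worst-case analysis (value 0), while "d2" does not: Nature can realize the state 4 and trigger the penalty, so the DM refrains from selecting it.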
The role of "min" and "max"
It should be stressed that the feature that accords infogap's robustness model its typical Maximin character is not the mere presence of both max and min in the formulation of the infogap model. Rather, the reason for this is a deeper one. It goes to the heart of the conceptual framework that the Maximin model captures: Nature playing against the DM. This is what is crucial here.
To see that this is so, let us generalize infogap's robustness model and consider the following modified model instead:
where in this context the set and the function in question are arbitrary. Note that it is not assumed that the function is realvalued. Also note that "min" is absent from this model.
All we need to do to incorporate a min into this model is to express the constraint
as a worstcase requirement. This is a straightforward task, observing that for any triplet (q, α, u) of interest we have
where
hence,
which, of course, is a Maximin model a la Mathematical Programming.
In short,
Note that although the model on the left does not include an explicit "min", it is nevertheless a typical Maximin model. The feature rendering it a Maximin model is the performance requirement, which lends itself to an intuitive worstcase formulation and interpretation.
In fact, the presence of a double "max" in an infogap robustness model does not necessarily alter the fact that this model is a Maximin model. For instance, consider the robustness model
This is an instance of the following Maximin model
where
The "inner min" indicates that Nature plays against the DM—the "max" player—hence the model is a robustness model.
The nature of the infogap/Maximin/Minimin connection
This modeling issue is discussed here because claims have been made that although there is a close relationship between infogap's robustness and opportuneness models and the generic Maximin and Minimin models, respectively, the description of infogap as an instance of these models is too strong. The argument put forward is that although it is true that infogap's robustness model can be expressed as a Maximin model, the former is not an instance of the latter.
This objection apparently stems from the fact that any optimization problem can be formulated as a Maximin model by a simple employment of dummy variables. That is, clearly
where
for any arbitrary nonempty set.
The point of this objection seems to be that we are running the risk of watering down the meaning of the term instance if we thus contend that any minimization problem is an instance of the Maximin model.
It must therefore be pointed out that this concern is utterly unwarranted in the case of the infogap/Maximin/Minimin relation. The correspondence between infogap's robustness model and the generic Maximin model is neither contrived nor is it formulated with the aid of dummy objects. The correspondence is immediate, intuitive, and compelling; hence, it is aptly described by the term instance of.
Specifically, as shown above, infogap's robustness model is an instance of the generic Maximin model specified by the following constructs:
Furthermore, those objecting to the use of the term instance of should note that the Maximin model formulated above has an equivalent so-called Mathematical Programming (MP) formulation deriving from the fact that
where ℝ denotes the real line.
So here are side by side infogap's robustness model and the two equivalent formulations of the generic Maximin paradigm:
Note that the equivalence between these three representations of the same decisionmaking situation makes no use of dummy variables. It is based on the equivalence
deriving directly from the definition of the characteristic function.
Clearly then, infogap's robustness model is an instance of the generic Maximin model.
Similarly, for infogap's opportuneness model we have
Again, it should be stressed that the equivalence between these three representations of the same decisionmaking situation makes no use of dummy variables. It is based on the equivalence
deriving directly from the definition of the characteristic function.
Thus, to "help" the DM minimize the outcome, a sympathetic Nature will select a u that minimizes the outcome over U(α, ũ).
Clearly, infogap's opportuneness model is an instance of the generic Minimin model.
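The opportuneness (Minimin) side admits the same kind of numerical sketch as the robustness side. The reward model, estimate and windfall level are hypothetical, and U(α) is again a simple interval around the estimate:

```python
import numpy as np

u_est, r_w = 0.0, 9.0      # hypothetical estimate and windfall requirement

def R(q, u):
    # Hypothetical reward: larger deviations from the estimate can help.
    return q + u ** 2

def best_case(q, alpha, grid=401):
    # Sympathetic Nature maximizes R over U(alpha) = [u_est - alpha, u_est + alpha].
    u = np.linspace(u_est - alpha, u_est + alpha, grid)
    return R(q, u).max()

def opportuneness(q, alphas=np.linspace(0.0, 5.0, 501)):
    # The smallest alpha whose best case already reaches the windfall r_w.
    for a in alphas:
        if best_case(q, a) >= r_w:
            return float(a)
    return float("inf")

print(opportuneness(q=5.0))
```

For q = 5 the best case over [−α, α] is 5 + α², which first reaches the windfall r_w = 9 at α = 2, so the computed opportuneness is about 2: a small value here is good news, since little uncertainty is needed for the windfall to be possible.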
Other formulations
There are of course other valid representations of the robustness/opportuneness models. For instance, in the case of the robustness model, the outcomes can be defined as follows (Sniedovich 2007^{[70]}) :
where the binary operation is defined as follows:
The corresponding MP format of the Maximin model would then be as follows:
In words, to maximize the robustness, the DM selects the largest value of α such that the performance constraint is satisfied by all u in U(α, ũ). In plain language: the DM selects the largest value of α whose worst outcome in the region of uncertainty of size α satisfies the performance requirement.
Simplifications
As a rule the classical Maximin formulations are not particularly useful when it comes to solving the problems they represent, as no "general purpose" Maximin solver is available (Rustem and Howe 2002^{[60]}).
It is common practice therefore to simplify the classical formulation with a view to derive a formulation that would be readily amenable to solution. This is a problemspecific task which involves exploiting a problem's specific features. The mathematical programming format of Maximin is often more userfriendly in this regard.
The best example is of course the classical Maximin model of 2person zerosum games which after streamlining is reduced to a standard linear programming model (Thie 1988,^{[72]} pp. 314–317) that is readily solved by linear programming algorithms.
To reiterate, this linear programming model is an instance of the generic Maximin model obtained via simplification of the classical Maximin formulation of the 2person zerosum game.
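The streamlining idea can be illustrated on a tiny case. The full LP reduction handles general m×n games; for a 2×2 matrix (the payoffs below are made up) the row player's Maximin problem, maximizing the minimum expected column payoff, collapses to a one-dimensional piecewise-linear maximization that can be solved directly:

```python
A = [[3.0, 0.0],
     [1.0, 2.0]]   # illustrative payoffs: row player maximizes, column player minimizes

def value_at(x):
    # Expected payoff of each column when row 1 is played with probability x;
    # the column player (Nature) answers with the worst column for the DM.
    return min(x * A[0][j] + (1.0 - x) * A[1][j] for j in range(2))

def solve():
    # The maximum of this concave piecewise-linear function lies at an
    # endpoint or at the crossing point of the two column lines.
    candidates = [0.0, 1.0]
    den = (A[0][0] - A[1][0]) - (A[0][1] - A[1][1])
    if den != 0:
        x = (A[1][1] - A[1][0]) / den
        if 0.0 <= x <= 1.0:
            candidates.append(x)
    return max(candidates, key=value_at)

x_opt = solve()
print(x_opt, value_at(x_opt))   # mixed strategy for row 1 and the game value
```

For this matrix the optimal mixed strategy plays row 1 with probability 0.25 and guarantees the game value 1.5, exactly what the LP formulation would return.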
Another example is dynamic programming where the Maximin paradigm is incorporated in the dynamic programming functional equation representing sequential decision processes that are subject to severe uncertainty (e.g. Sniedovich 2003^{[73]}^{[74]}).
Summary
Recall that in plain language the Maximin paradigm maintains the following:
Maximin Rule: The maximin rule tells us to rank alternatives by their worst possible outcomes: we are to adopt the alternative the worst outcome of which is superior to the worst outcome of the others.
Rawls (1971, p. 152)
Infogap's robustness model is a simple instance of this paradigm that is characterized by a specific decision space, state spaces and objective function, as discussed above.
Much can be gained by viewing infogap's theory in this light.
See also
External links
 InfoGap Theory and Its Applications, further information on infogap theory
 InfoGap Campaign, further analysis and critique of infogap
Notes
 ↑ Here are some examples: In many fields, including engineering, economics, management, biological conservation, medicine, homeland security, and more, analysts use models and data to evaluate and formulate decisions. An infogap is the disparity between what is known and what needs to be known in order to make a reliable and responsible decision. Infogaps are Knightian uncertainties: a lack of knowledge, an incompleteness of understanding. Infogaps are nonprobabilistic and cannot be insured against or modelled probabilistically. A common infogap, though not the only kind, is uncertainty in the value of a parameter or of a vector of parameters, such as the durability of a new material or the future rates of return on stocks. Another common infogap is uncertainty in the shape of a probability distribution. Another infogap is uncertainty in the functional form of a property of the system, such as friction force in engineering, or the Phillips curve in economics. Another infogap is in the shape and size of a set of possible vectors or functions. For instance, one may have very little knowledge about the relevant set of cardiac waveforms at the onset of heart failure in a specific individual.
References
 ↑ Yakov BenHaim, InformationGap Decision Theory: Decisions Under Severe Uncertainty, Academic Press, London, 2001.
 ↑ Yakov BenHaim, InfoGap Decision Theory: Decisions Under Severe Uncertainty, 2nd edition, Academic Press, London, 2006.
 ↑ ^{3.0} ^{3.1} ^{3.2} Sniedovich, M. (2010). "A bird's view of infogap decision theory". Journal of Risk Finance. 11 (3): 268–283. doi:10.1108/15265941011043648.
 ↑ How Did InfoGap Theory Start? How Does it Grow?
 ↑ ^{5.0} ^{5.1} Yakov BenHaim, Robust Reliability in the Mechanical Sciences, Springer, Berlin, 1996.
 ↑ Hipel, Keith W.; BenHaim, Yakov (1999). "Decision making in an uncertain world: Informationgap modelling in water resources management". IEEE Trans., Systems, Man and Cybernetics. 29 (4): 506–517. doi:10.1109/5326.798765.
 ↑ ^{7.0} ^{7.1} ^{7.2} Yakov BenHaim, 2005, Infogap Decision Theory For Engineering Design. Or: Why `Good' is Preferable to `Best', appearing as chapter 11 in Engineering Design Reliability Handbook, Edited by Efstratios Nikolaidis, Dan M.Ghiocel and Surendra Singhal, CRC Press, Boca Raton.
 ↑ ^{8.0} ^{8.1} Kanno, Y.; Takewaki, I. (2006). "Robustness analysis of trusses with separable load and structural uncertainties". International Journal of Solids and Structures. 43 (9): 2646–2669. doi:10.1016/j.ijsolstr.2005.06.088.
 ↑ ^{9.0} ^{9.1} Kaihong Wang, 2005, Vibration Analysis of Cracked Composite Bendingtorsion Beams for Damage Diagnosis, PhD thesis, Virginia Polytechnic Institute, Blacksburg, Virginia.
 ↑ ^{10.0} ^{10.1} Kanno, Y.; Takewaki, I. (2006). "Sequential semidefinite program for maximum robustness design of structures under load uncertainty". Journal of Optimization Theory and Applications. 130 (2): 265–287. doi:10.1007/s109570069102z.
 ↑ ^{11.0} ^{11.1} Pierce, S.G.; Worden, K.; Manson, G. (2006). "A novel informationgap technique to assess reliability of neural networkbased damage detection". Journal of Sound and Vibration. 293 (1–2): 96–111. doi:10.1016/j.jsv.2005.09.029.
 ↑ Pierce, Gareth; BenHaim, Yakov; Worden, Keith; Manson, Graeme (2006). "Evaluation of neural network robust reliability using informationgap theory". IEEE Transactions on Neural Networks. 17 (6): 1349–1361. PMID 17131652. doi:10.1109/TNN.2006.880363.
 ↑ ^{13.0} ^{13.1} Chetwynd, D.; Worden, K.; Manson, G. (2006). "An application of intervalvalued neural networks to a regression problem". Proceedings of the Royal Society A. 462: 3097–3114. doi:10.1098/rspa.2006.1717.
 ↑ Lim, D.; Ong, Y. S.; Jin, Y.; Sendhoff, B.; Lee, B. S. (2006). "Inverse Multiobjective Robust Evolutionary Design". Genetic Programming and Evolvable Machines. 7 (4): 383–404. doi:10.1007/s1071000690137.
 ↑ Vinot, P.; Cogan, S.; Cipolla, V. (2005). "A robust modelbased test planning procedure". Journal of Sound and Vibration. 288 (3): 571–585. doi:10.1016/j.jsv.2005.07.007.
 ↑ ^{16.0} ^{16.1} ^{16.2} Takewaki, Izuru; BenHaim, Yakov (2005). "Infogap robust design with load and model uncertainties". Journal of Sound and Vibration. 288 (3): 551–570. doi:10.1016/j.jsv.2005.07.005.
 ↑ Izuru Takewaki and Yakov BenHaim, 2007, Infogap robust design of passively controlled structures with load and model uncertainties, Structural Design Optimization Considering Uncertainties, Yiannis Tsompanakis, Nikkos D. Lagaros and Manolis Papadrakakis, editors, Taylor and Francis Publishers.
 ↑ Hemez, Francois M.; BenHaim, Yakov (2004). "Infogap robustness for the correlation of tests and simulations of a nonlinear transient". Mechanical Systems and Signal Processing. 18 (6): 1443–1467. doi:10.1016/j.ymssp.2004.03.001.
 ↑ ^{19.0} ^{19.1} Levy, Jason K.; Hipel, Keith W.; Kilgour, Marc (2000). "Using environmental indicators to quantify the robustness of policy alternatives to uncertainty". Ecological Modelling. 130 (1–3): 79–86. doi:10.1016/S03043800(00)00226X.
 ↑ Moilanen, A.; Wintle, B.A. (2006). "Uncertainty analysis favours selection of spatially aggregated reserve structures". Biological Conservation. 129 (3): 427–434. doi:10.1016/j.biocon.2005.11.006.
 ↑ Halpern, Benjamin S.; Regan, Helen M.; Possingham, Hugh P.; McCarthy, Michael A. (2006). "Accounting for uncertainty in marine reserve design". Ecology Letters. 9 (1): 2–11. PMID 16958861. doi:10.1111/j.14610248.2005.00827.x.
 ↑ Regan, Helen M.; BenHaim, Yakov; Langford, Bill; Wilson, Will G.; Lundberg, Per; Andelman, Sandy J.; Burgman, Mark A. (2005). "Robust decision making under severe uncertainty for conservation management". Ecological Applications. 15 (4): 1471–1477. doi:10.1890/035419.
 ↑ McCarthy, M.A.; Lindenmayer, D.B. (2007). "Infogap decision theory for assessing the management of catchments for timber production and urban water supply". Environmental Management. 39 (4): 553–562. PMID 17318697. doi:10.1007/s0026700600223.
 ↑ Crone, Elizabeth E.; Pickering, Debbie; Schultz, Cheryl B. (2007). "Can captive rearing promote recovery of endangered butterflies? An assessment in the face of uncertainty". Biological Conservation. 139 (1–2): 103–112. doi:10.1016/j.biocon.2007.06.007.
 ↑ L. Joe Moffitt, John K. Stranlund and Craig D. Osteen, 2007, Robust detection protocols for uncertain introductions of invasive species, Journal of Environmental Management, In Press, Corrected Proof, Available online 27 August 2007.
 ↑ Burgman, M. A.; Lindenmayer, D.B.; Elith, J. (2005). "Managing landscapes for conservation under uncertainty". Ecology. 86 (8): 2007–2017. doi:10.1890/040906.
 ↑ Moilanen, A.; Elith, J.; Burgman, M.; Burgman, M (2006). "Uncertainty analysis for regionalscale reserve selection". Conservation Biology. 20 (6): 1688–1697. PMID 17181804. doi:10.1111/j.15231739.2006.00560.x.
 ↑ Moilanen, Atte; Runge, Michael C.; Elith, Jane; Tyre, Andrew; Carmel, Yohay; Fegraus, Eric; Wintle, Brendan; Burgman, Mark; Benhaim, Y (2006). "Planning for robust reserve networks using uncertainty analysis". Ecological Modelling. 199 (1): 115–124. doi:10.1016/j.ecolmodel.2006.07.004.
 ↑ Nicholson, Emily; Possingham, Hugh P. (2007). "Making conservation decisions under uncertainty for the persistence of multiple species". Ecological Applications. 17 (1): 251–265. PMID 17479849. doi:10.1890/10510761(2007)017[0251:MCDUUF]2.0.CO;2.
 ↑ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge.
 ↑ Carmel, Yohay; BenHaim, Yakov (2005). "Infogap robustsatisficing model of foraging behavior: Do foragers optimize or satisfice?". American Naturalist. 166 (5): 633–641. PMID 16224728. doi:10.1086/491691.
 ↑ Moffitt, Joe; Stranlund, John K.; Field, Barry C. (2005). "Inspections to Avert Terrorism: Robustness Under Severe Uncertainty". Journal of Homeland Security and Emergency Management. 2 (3): 3. doi:10.2202/1547-7355.1134.
 ↑ Beresford-Smith, Bryan; Thompson, Colin J. (2007). "Managing credit risk with info-gap uncertainty". The Journal of Risk Finance. 8 (1): 24–34. doi:10.1108/15265940710721055.
 ↑ Stranlund, John K.; Ben-Haim, Yakov (2007). "Price-based vs. quantity-based environmental regulation under Knightian uncertainty: An info-gap robust satisficing perspective". Journal of Environmental Management. In press, corrected proof, available online 28 March 2007.
 ↑ Ben-Haim, Yakov (2005). "Value at risk with info-gap uncertainty". Journal of Risk Finance. 6 (5): 388–403. doi:10.1108/15265940510633460.
 ↑ Ben-Haim, Yakov; Laufer, Alexander (1998). "Robust reliability of projects with activity-duration uncertainty". ASCE Journal of Construction Engineering and Management. 124 (2): 125–132. doi:10.1061/(ASCE)0733-9364(1998)124:2(125).
 ↑ Tahan, Meir; Ben-Asher, Joseph Z. (2005). "Modeling and analysis of integration processes for engineering systems". Systems Engineering. 8 (1): 62–77. doi:10.1002/sys.20021.
 ↑ Regev, Sary; Shtub, Avraham; Ben-Haim, Yakov (2006). "Managing project risks as knowledge gaps". Project Management Journal. 37 (5): 17–25.
 ↑ Fox, D.R.; Ben-Haim, Y.; Hayes, K.R.; McCarthy, M.; Wintle, B.; Dunstan, P. (2007). "An Info-Gap Approach to Power and Sample-size Calculations". Environmetrics. 18 (2): 189–203. doi:10.1002/env.811.
 ↑ Ben-Haim, Yakov (1994). "Convex models of uncertainty: Applications and implications". Erkenntnis: An International Journal of Analytic Philosophy. 41 (2): 139–156. doi:10.1007/BF01128824.
 ↑ Ben-Haim, Yakov (1999). "Set-models of information-gap uncertainty: Axioms and an inference scheme". Journal of the Franklin Institute. 336 (7): 1093–1117. doi:10.1016/S0016-0032(99)00024-1.
 ↑ Ben-Haim, Yakov (2000). "Robust rationality and decisions under severe uncertainty". Journal of the Franklin Institute. 337 (2–3): 171–199. doi:10.1016/S0016-0032(00)00016-8.
 ↑ Ben-Haim, Yakov (2004). "Uncertainty, probability and information-gaps". Reliability Engineering and System Safety. 85: 249–266. doi:10.1016/j.ress.2004.03.015.
 ↑ Klir, George J. (2006). Uncertainty and Information: Foundations of Generalized Information Theory. Wiley.
 ↑ Ben-Haim, Yakov (2007). "Peirce, Haack and Info-gaps". In Cornelis de Waal (ed.), Susan Haack, A Lady of Distinctions: The Philosopher Responds to Her Critics. Prometheus Books.
 ↑ Burgman, Mark (2005). Risks and Decisions for Conservation and Environmental Management. Cambridge: Cambridge University Press. p. 399.
 ↑ Sniedovich, M. (2007). "The art and science of modeling decision-making under severe uncertainty" (PDF). Decision-Making in Manufacturing and Services. 1 (1–2): 109–134.
 ↑ Simon, Herbert A. (1959). "Theories of decision making in economics and behavioral science". American Economic Review. 49: 253–283.
 ↑ Schwartz, Barry (2004). The Paradox of Choice: Why More Is Less. Harper Perennial.
 ↑ Conlisk, John (1996). "Why bounded rationality?". Journal of Economic Literature. 34: 669–700.
 ↑ Burgman, Mark (2005). Risks and Decisions for Conservation and Environmental Management. Cambridge: Cambridge University Press. pp. 391, 394.
 ↑ Vinot, P.; Cogan, S.; Cipolla, V. (2005). "A robust model-based test planning procedure". Journal of Sound and Vibration. 288 (3): 572. doi:10.1016/j.jsv.2005.07.007.
 ↑ Ben-Haim, Z.; Eldar, Y. C. (2005). "Maximum set estimators with bounded estimation error". IEEE Transactions on Signal Processing. 53 (8): 3172–3182.
 ↑ Babuška, I.; Nobile, F.; Tempone, R. (2005). "Worst case scenario analysis for elliptic problems with uncertainty". Numerische Mathematik. 101: 185–219.
 ↑ Ben-Haim, Yakov; Cogan, Scott; Sanseigne, Laetitia (1998). "Usability of Mathematical Models in Mechanical Decision Processes". Mechanical Systems and Signal Processing. 12: 121–134. doi:10.1006/mssp.1996.0137.
 ↑ (See also chapter 4 in Yakov Ben-Haim, Ref. 2.)
 ↑ Rosenhead, M.J.; Elton, M.; Gupta, S.K. (1972). "Robustness and Optimality as Criteria for Strategic Decisions". Operational Research Quarterly. 23 (4): 413–430. doi:10.1057/jors.1972.72.
 ↑ Rosenblatt, M.J.; Lee, H.L. (1987). "A robustness approach to facilities design". International Journal of Production Research. 25 (4): 479–486. doi:10.1080/00207548708919855.
 ↑ Kouvelis, P.; Yu, G. (1997). Robust Discrete Optimization and Its Applications. Kluwer.
 ↑ Rustem, B.; Howe, M. (2002). Algorithms for Worst-Case Design and Applications to Risk Management. Princeton University Press.
 ↑ Lempert, R.J.; Popper, S.W.; Bankes, S.C. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. The RAND Corporation.
 ↑ Ben-Tal, A.; El Ghaoui, L.; Nemirovski, A., eds. (2006). Mathematical Programming, special issue on robust optimization. 107 (1–2).
 ↑ Resnik, M.D. (1987). Choices: An Introduction to Decision Theory. Minneapolis, MN: University of Minnesota Press.
 ↑ French, S.D. (1988). Decision Theory. Ellis Horwood.
 ↑ Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Belknap Press.
 ↑ Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis (Second ed.). New York: Springer Science + Business Media. ISBN 0-387-96098-8.
 ↑ Tintner, G. (1952). "Abraham Wald's contributions to econometrics". The Annals of Mathematical Statistics. 23 (1): 21–28. doi:10.1214/aoms/1177729482.
 ↑ Babuška, I.; Nobile, F.; Tempone, R. (2005). "Worst case scenario analysis for elliptic problems with uncertainty". Numerische Mathematik. 101 (2): 185–219. doi:10.1007/s00211-005-0601-x.
 ↑ Ben-Haim, Y. (1999). "Design certification with information-gap uncertainty". Structural Safety. 2: 269–289. doi:10.1016/s0167-4730(99)00023-5.
 ↑ Sniedovich, M. (2007). "The art and science of modeling decision-making under severe uncertainty" (PDF). Decision-Making in Manufacturing and Services. 1 (1–2): 111–136.
 ↑ Ecker, J.G.; Kupferschmid, M. (1988). Introduction to Operations Research. Wiley.
 ↑ Thie, P. (1988). An Introduction to Linear Programming and Game Theory. New York: Wiley.
 ↑ Sniedovich, M. (2003). "OR/MS Games: 3. The Counterfeit coin problem". INFORMS Transactions on Education. 3 (2): 32–41. doi:10.1287/ited.3.2.32.
 ↑ Sniedovich, M. (2003). "OR/MS Games: 4. The joy of egg-dropping in Braunschweig and Hong Kong". INFORMS Transactions on Education. 4 (1): 48–64. doi:10.1287/ited.4.1.48.