Jean-François Mertens

Jean-François Mertens
Born 11 March 1946
Antwerp, Belgium
Died 17 July 2012 (aged 66)[1]
Nationality Belgian
Fields Game Theory
Mathematical economics
Alma mater Université Catholique de Louvain
Docteur ès Sciences 1970
Doctoral advisor José Paris
Jacques Neveu
Influences Robert Aumann
Reinhard Selten
John Harsanyi
John von Neumann
Influenced Claude d'Aspremont
Bernard De Meyer
Amrita Dhillon
Francoise Forges
Jean Gabszewicz
Srihari Govindan
Abraham Neyman
Anna Rubinchik
Sylvain Sorin
Notable awards Econometric Society Fellow
von Neumann Lecturer of Game Theory Society

Jean-François Mertens (11 March 1946 – 17 July 2012) was a Belgian game theorist and mathematical economist.[1]

Jean-François Mertens made contributions to probability theory[2] and published articles on elementary topology,[3][4] but he was mostly active in economic theory. In particular, he contributed to order-book models of market games, cooperative games, noncooperative games, repeated games, epistemic models of strategic behavior, and refinements of Nash equilibrium (see solution concept).

In cooperative game theory he contributed to the solution concepts called the core and the Shapley value. Regarding repeated games and stochastic games, Mertens's 1982[5] and 1986[6] survey articles, and his 1994[7] survey co-authored with Sylvain Sorin and Shmuel Zamir, are compendiums of results on this topic, including his own contributions.

Epistemic models

Mertens and Zamir[8][9] implemented John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs as well as a probability distribution over other players' types. They constructed a universal space of types in which, subject to specified consistency conditions, each type corresponds to the infinite hierarchy of his probabilistic beliefs about others' probabilistic beliefs. They also showed that any subspace can be approximated arbitrarily closely by a finite subspace, which is the usual tactic in applications.[10]
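
In one standard formulation (the notation here is not from the original article), the Mertens–Zamir universal type space can be stated compactly:

```latex
% Universal type space (standard formulation): S is the space of basic
% uncertainty and N the set of players.  A type of player i is a
% coherent infinite hierarchy of beliefs, and the universal space T_i
% satisfies the homeomorphism
\[
  T_i \;\cong\; \Delta\Bigl(S \times \prod_{j \neq i} T_j\Bigr),
\]
% so each type induces a belief over the basic uncertainty and the
% other players' types, and hence encodes the whole hierarchy:
% first-order beliefs on S, second-order beliefs on S and the others'
% first-order beliefs, and so on.
```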

Repeated games with incomplete information

Repeated games with incomplete information were pioneered by Aumann and Maschler.[11][12] Two of Jean-François Mertens's contributions to the field are extensions of repeated two-person zero-sum games with incomplete information on both sides, in two directions: (1) the type of information available to players, and (2) the signalling structure.[13]

  • (1) Information: Mertens extended the theory from the independent case where the private information of the players is generated by independent random variables, to the dependent case where correlation is allowed.
  • (2) Signalling structures: the standard signalling theory where after each stage both players are informed of the previous moves played, was extended to deal with general signalling structure where after each stage each player gets a private signal that may depend on the moves and on the state.

In those set-ups Jean-François Mertens provided an extension of the characterization of the minmax and maxmin value for the infinite game in the dependent case with state-independent signals.[14] Additionally, with Shmuel Zamir,[15] he showed the existence of a limiting value. Such a value can be thought of either as the limit of the values v_n of the n-stage games, as n goes to infinity, or as the limit of the values v_{\lambda} of the {\lambda}-discounted games, as agents become more patient and {\lambda}\to 1.
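
For the benchmark case of lack of information on one side, Aumann and Maschler identified the limiting value with the concavification cav u of the value u(p) of the non-revealing game. A minimal sketch of concavification on a grid follows; the example function u is hypothetical, chosen only to illustrate the construction:

```python
def concavify(ps, us):
    """Piecewise-linear concavification (upper concave envelope) of the
    points (ps[i], us[i]); returns a function cav(p)."""
    pts = sorted(zip(ps, us))
    hull = []
    for x, y in pts:
        # drop the last hull point while it lies on or below the chord
        # from the second-to-last hull point to the new point
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y - y1) >= (y2 - y1) * (x - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))

    def cav(p):
        # linear interpolation along the upper hull
        for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
            if x1 <= p <= x2:
                t = (p - x1) / (x2 - x1)
                return (1 - t) * y1 + t * y2
        raise ValueError("p outside the grid")
    return cav

# hypothetical non-revealing value u(p) = (p - 1/2)^2: convex, so its
# concavification on [0, 1] is the constant 1/4
grid = [i / 100 for i in range(101)]
cav = concavify(grid, [(p - 0.5) ** 2 for p in grid])
```

Here the concavification reflects the informed player's optimal splitting of the prior p; for a u that is already concave, cav u coincides with u.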

A building block of Mertens and Zamir's approach is the construction of an operator, now simply referred to as the MZ operator in the field in their honor. In continuous time (differential games with incomplete information), the MZ operator becomes an infinitesimal operator at the core of the theory of such games.[16][17][18] By characterizing the limit value as the unique solution of a pair of functional equations, Mertens and Zamir showed that it may be a transcendental function, unlike the maxmin or the minmax (the value in the complete information case). Mertens also found the exact rate of convergence in the case of games with incomplete information on one side and a general signalling structure.[19] A detailed analysis of the speed of convergence of the n-stage (finitely repeated) game value to its limit has profound links to the central limit theorem and the normal law, as well as to the maximal variation of bounded martingales.[20][21] Attacking the difficult case of games with state-dependent signals and without a recursive structure, Mertens and Zamir introduced new tools based on an auxiliary game, reducing the set of strategies to a core that is 'statistically sufficient.'[22][23]
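
In standard notation (u is the value of the non-revealing game, cav_I concavification in player I's belief variable, vex_II convexification in player II's), the pair of functional equations can be written as:

```latex
% Mertens–Zamir system: the limit value v is the unique solution of
\[
  v \;=\; \operatorname{cav}_{\mathrm{I}} \min(u, v),
  \qquad
  v \;=\; \operatorname{vex}_{\mathrm{II}} \max(u, v).
\]
% For incomplete information on one side with standard signalling, the
% n-stage value converges at rate 1/sqrt(n):
\[
  v_n \;=\; v + O\!\left(\tfrac{1}{\sqrt{n}}\right),
\]
% which is where the links to the central limit theorem and the normal
% law mentioned above arise.
```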

Collectively, Jean-François Mertens's contributions with Zamir (and also with Sorin) provide the foundation for a general theory of two-person zero-sum repeated games that encompasses both stochastic and incomplete-information aspects, and in which concepts of wide relevance are deployed, such as reputation and bounds on rational levels for the payoffs, together with tools like the splitting lemma, signalling, and approachability. While in many ways Mertens's work here goes back to the original von Neumann roots of game theory, with its zero-sum two-person set-up, its vitality and innovations with wider application have been pervasive.

Stochastic games

Stochastic games were introduced by Lloyd Shapley in 1953.[24] The first paper studied the discounted two-person zero-sum stochastic game with finitely many states and actions and demonstrated the existence of a value and of stationary optimal strategies. The study of the undiscounted case evolved over the following three decades, with solutions of special cases by Blackwell and Ferguson in 1968[25] and Kohlberg in 1974. The existence of an undiscounted value in a very strong sense, both a uniform value and a limiting average value, was proved in 1981 by Jean-François Mertens and Abraham Neyman.[26] The study of the non-zero-sum case with general state and action spaces attracted much attention, and Mertens and Parthasarathy[27] proved a general existence result under the condition that the transitions, as a function of the state and actions, are norm-continuous in the actions.
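
Shapley's discounted value can be computed by iterating his one-stage operator, which evaluates each state by the value of an auxiliary matrix game. A minimal sketch for a hypothetical two-state, two-action game (the payoff and transition data are illustrative, not taken from the literature):

```python
def matrix_game_value(m):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:          # saddle point in pure strategies
        return maximin
    return (a * d - b * c) / (a + d - b - c)   # value of the mixed extension

def shapley_iteration(r, P, beta, tol=1e-10):
    """Value iteration with Shapley's operator for a discounted two-person
    zero-sum stochastic game.  r[s][i][j] is the stage payoff in state s,
    P[s][i][j][t] the probability of moving to state t; beta < 1 is the
    discount factor, so the operator is a contraction and iteration converges."""
    n = len(r)
    v = [0.0] * n
    while True:
        v_new = []
        for s in range(n):
            aux = [[r[s][i][j] + beta * sum(P[s][i][j][t] * v[t] for t in range(n))
                    for j in range(2)] for i in range(2)]
            v_new.append(matrix_game_value(aux))
        if max(abs(x - y) for x, y in zip(v, v_new)) < tol:
            return v_new
        v = v_new

# hypothetical data: two states, two actions each
r = [[[1.0, 0.0], [0.0, 1.0]],     # state 0: matching-pennies-like payoffs
     [[2.0, 1.0], [1.0, 0.0]]]     # state 1
P = [[[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]]],  # state 0 transitions
     [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]]]]  # state 1 transitions
v = shapley_iteration(r, P, beta=0.9)
```

Since all stage payoffs lie in [0, 2] and the discount factor is 0.9, the fixed point must lie between 0 and 2/(1 − 0.9) = 20 in each state.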

Market games: limit price mechanism

Mertens had the idea of using linear competitive economies as an order book to model limit orders and to generalize double auctions to a multivariate set-up.[28] Acceptable relative prices of players are conveyed by their linear preferences; money can be one of the goods, and it is acceptable for agents to have positive marginal utility for money in this case (after all, agents are really just orders!). Indeed, this is the case for most orders in practice, and more than one order (and corresponding order-agent) can come from the same actual agent. In equilibrium, a good sold must fetch a relative price, compared to the good bought, no less than the one implied by the utility function. Goods brought to the market (the quantities in the order) are conveyed by initial endowments. Limit orders are represented as follows: the order-agent brings one good to the market and has non-zero marginal utilities in that good and in one other (money or the numeraire). An at-market sell order has zero utility for the good sold and positive utility for money or the numeraire. Mertens clears orders, creating a matching engine, by using the competitive equilibrium, in spite of most of the usual interiority conditions being violated for the auxiliary linear economy. Mertens's mechanism provides a generalization of Shapley–Shubik trading posts and has the potential of real-life implementation with limit orders across markets, rather than with just one specialist in one market.
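
Mertens's mechanism itself clears via the competitive equilibrium of the auxiliary linear economy; the single-market special case it generalizes is the familiar uniform-price call auction, which can be sketched as follows (the order data are hypothetical):

```python
def clearing_price(buys, sells):
    """Uniform-price clearing of limit orders.  buys and sells are lists of
    (limit_price, quantity); returns the (price, volume) pair maximizing
    executable volume, ties broken toward the lowest candidate price."""
    candidates = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best = None
    for p in candidates:
        demand = sum(q for lim, q in buys if lim >= p)   # buyers accepting p
        supply = sum(q for lim, q in sells if lim <= p)  # sellers accepting p
        vol = min(demand, supply)
        if best is None or vol > best[1]:
            best = (p, vol)
    return best

# two buy orders and two sell orders; the market clears at price 8,
# trading 4 units
price, volume = clearing_price([(10, 5), (9, 3)], [(8, 4), (11, 2)])
```

In Mertens's multivariate set-up, each such order becomes a linear-utility order-agent, and the clearing price vector is an equilibrium of the resulting linear economy rather than a single crossing point.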

Shapley value

The diagonal formula in the theory of non-atomic cooperative games elegantly attributes to each infinitesimal player, as his Shapley value, his marginal contribution to the worth of a perfect sample of the population of players, averaged over all possible sample sizes. Such a marginal contribution is most easily expressed in the form of a derivative, leading to the diagonal formula formulated by Aumann and Shapley. This is the historical reason why some differentiability conditions were originally required to define the Shapley value of non-atomic cooperative games. By first exchanging the order of taking the "average over all possible sample sizes" and taking such a derivative, Jean-François Mertens used the smoothing effect of the averaging process to extend the applicability of the diagonal formula.[29] This trick alone works well for majority games (represented by a step function applied to the percentage of population in the coalition). Exploiting this commutation idea even further, Mertens considered invariant transformations and took averages over those before taking the derivative. Doing so, he extended the diagonal formula to a much larger space of games, defining a Shapley value at the same time.[30][31]
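
In the Aumann–Shapley setting, for a game of the form v = f ∘ μ with μ a vector of non-atomic measures, the diagonal formula reads (in one standard formulation):

```latex
% Diagonal formula: the value of an infinitesimal coalition S is its
% average marginal contribution along the diagonal t*mu(I), 0 <= t <= 1:
\[
  (\varphi v)(S) \;=\; \int_0^1
    \lim_{\varepsilon \to 0^+}
    \frac{f\bigl(t\,\mu(I) + \varepsilon\,\mu(S)\bigr)
          - f\bigl(t\,\mu(I)\bigr)}{\varepsilon}\, dt .
\]
% For a majority game f is a step function, so the derivative fails to
% exist at the jump; Mertens's device is to average (over t, and over
% invariant transformations) before differentiating, which smooths out
% the non-differentiability.
```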

Refinements and Mertens-stable equilibria

Solution concepts that are refinements[32] of Nash equilibrium have been motivated primarily by arguments for backward induction and forward induction. Backward induction posits that a player's optimal action now anticipates the optimality of his and others' future actions. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium, where the latter three are obtained as limits of perturbed strategies. Forward induction posits that a player's optimal action now presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction[33] is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information to be reached. In particular, since completely mixed Nash equilibria are sequential, such equilibria, when they exist, satisfy both forward and backward induction. In his work Mertens managed for the first time to select Nash equilibria that satisfy both forward and backward induction. The method is to let this feature be inherited from perturbed games that are forced to have completely mixed strategies; the goal is achieved only with Mertens-stable equilibria, not with the simpler Kohlberg–Mertens equilibria.

Elon Kohlberg and Mertens[34] emphasized that a solution concept should be consistent with an admissible decision rule. Moreover, it should satisfy the invariance principle that it should not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. In particular, it should depend only on the reduced normal form of the game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens[35][36] emphasized also the importance of the small worlds principle that a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.

Kohlberg and Mertens defined tentatively a set-valued solution concept called stability for games with finite numbers of pure strategies that satisfies admissibility, invariance and forward induction, but a counterexample showed that it need not satisfy backward induction; viz. the set might not include a sequential equilibrium. Subsequently, Mertens[37][38] defined a refinement, also called stability and now often called a set of Mertens-stable equilibria, that has several desirable properties:

  • Admissibility and Perfection: All equilibria in a stable set are perfect, hence admissible.
  • Backward Induction and Forward Induction: A stable set includes a proper equilibrium of the normal form of the game that induces a quasi-perfect and sequential equilibrium in every extensive-form game with perfect recall that has the same normal form. A subset of a stable set survives iterative elimination of weakly dominated strategies and strategies that are inferior replies at every equilibrium in the set.
  • Invariance and Small Worlds: The stable sets of a game are the projections of the stable sets of any larger game in which it is embedded while preserving the original players' feasible strategies and payoffs.
  • Decomposition and Player Splitting. The stable sets of the product of two independent games are the products of their stable sets. Stable sets are not affected by splitting a player into agents such that no path through the game tree includes actions of two agents.

For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game.[39]

A stable set is defined mathematically by (in brief) essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition entails more than the property that every nearby game has a nearby equilibrium. Essentiality requires further that no deformation of the projection maps to the boundary, which ensures that perturbations of the fixed point problem defining Nash equilibria have nearby solutions. This is apparently necessary to obtain all the desirable properties listed above.

Social choice theory and relative utilitarianism

A Social Welfare Function (SWF) maps profiles of individual preferences to social preferences over a fixed set of alternatives. In a seminal paper, Arrow (1950)[40] proved his famous "impossibility theorem": there does not exist an SWF satisfying even a very minimal system of axioms: Unrestricted Domain, Independence of Irrelevant Alternatives, the Pareto criterion, and Non-dictatorship. A large literature documents various ways to relax Arrow's axioms to get possibility results. Relative Utilitarianism (RU) (Dhillon and Mertens, 1999)[41] is an SWF that consists of normalizing individual utilities between 0 and 1 and adding them, and it is a "possibility" result derived from a system of axioms that are very close to Arrow's original ones but modified for the space of preferences over lotteries. Unlike classical utilitarianism, RU does not assume cardinal utility or interpersonal comparability. Starting from individual preferences over lotteries, which are assumed to satisfy the von Neumann–Morgenstern axioms (or equivalent), the axiom system uniquely fixes the interpersonal comparisons. The theorem can be interpreted as providing an axiomatic foundation for the "right" interpersonal comparisons, a problem that has plagued social choice theory for a long time. The axioms are:

  • Individualism: If all individuals are indifferent between all alternatives then so is society,
  • Non Triviality: The SWF is not constantly totally indifferent between all alternatives,
  • No Ill will: It is not true that when all individuals but one are totally indifferent then society's preferences are opposite to his,
  • Anonymity: A permutation of all individuals leaves the social preferences unchanged.
  • Independence of Redundant Alternatives: This axiom restricts Arrow's Independence of Irrelevant Alternatives (IIA) to the case where both before and after the change, the "irrelevant" alternatives are lotteries on the other alternatives.
  • Monotonicity: a condition much weaker than the following "good will" axiom: consider two lotteries p and q and two preference profiles that coincide for all individuals except i; if i is indifferent between p and q in the first profile but strictly prefers p to q in the second, then society strictly prefers p to q in the second profile as well.
  • Continuity: essentially a closed-graph property, using the strongest possible notion of convergence for preference profiles.

The main theorem shows that RU satisfies all the axioms, and that if the number of individuals is greater than three and the number of candidates is greater than five, then any SWF satisfying the above axioms is equivalent to RU whenever there exist at least two individuals who have neither exactly the same nor exactly the opposite preferences.
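
The normalize-and-add rule itself is elementary; a minimal sketch follows (the utility profiles are hypothetical, and only the ordinal content of each individual's vNM utilities matters, since each is rescaled to [0, 1]):

```python
def relative_utilitarian(profiles):
    """Relative Utilitarianism: rescale each individual's utility vector
    affinely to [0, 1] and add them up.  Totally indifferent individuals
    contribute nothing (matching the Individualism axiom)."""
    n_alts = len(profiles[0])
    social = [0.0] * n_alts
    for u in profiles:
        lo, hi = min(u), max(u)
        if hi > lo:                     # skip totally indifferent individuals
            social = [s + (x - lo) / (hi - lo) for s, x in zip(social, u)]
    return social

# two individuals, three alternatives: scales differ, but only the
# normalized utilities enter the social ranking
ranking = relative_utilitarian([[0, 10, 5], [0, 1, 2]])
```

Note that rescaling makes the rule invariant to the arbitrary origin and unit of each individual's vNM utility, which is exactly how the axiom system pins down the interpersonal comparisons.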

Intergenerational equity in policy evaluation

Relative utilitarianism[41] can serve to rationalize using 2% as an intergenerationally fair social discount rate for cost-benefit analysis. Mertens and Rubinchik[42] show that a shift-invariant welfare function defined on a rich space of (temporary) policies, if differentiable, has as its derivative a discounted sum of the policy (change), with a fixed discount rate, i.e., the induced social discount rate. (Shift-invariance requires a function evaluated on a shifted policy to return an affine transformation of the value of the original policy, with coefficients depending on the time-shift only.) In an overlapping generations model with exogenous growth (with time being the whole real line), the relative utilitarian function is shift-invariant when evaluated on (small temporary) policies around a balanced growth equilibrium (with capital stock growing exponentially). When policies are represented as changes in the endowments of individuals (transfers or taxes), and the utilities of all generations are weighted equally, the social discount rate induced by relative utilitarianism is the growth rate of per capita GDP (2% in the U.S.[43]). This is also consistent with the current practices described in Circular A-4 of the US Office of Management and Budget, which states:

If your rule will have important intergenerational benefits or costs you might consider a further sensitivity analysis using a lower but positive discount rate in addition to calculating net benefits using discount rates of 3 and 7 percent.[44]

References

  1. 1.0 1.1 Lua error in package.lua at line 80: module 'strict' not found.
  2. Lua error in package.lua at line 80: module 'strict' not found.
  3. Mertens, Jean-François, 1992. "Essential Maps and Manifolds," Proceedings of the American Mathematical Society, 115(2).
  4. Mertens, Jean-François, 2003. "Localization of the Degree on Lower-dimensional Sets," International Journal of Game Theory, 32: 379–386.
  5. Mertens, Jean-François, 1982. "Repeated Games: An Overview of the Zero-sum Case," Advances in Economic Theory, edited by W. Hildenbrand, Cambridge University Press, London and New York.
  6. Mertens, Jean-François, 1986. "Repeated Games," International Congress of Mathematicians.
  7. Mertens, Jean-François, Sylvain Sorin, and Shmuel Zamir, 1994. "Repeated Games," Parts A, B, C; Discussion Papers 1994020, 1994021, 1994022; Université Catholique de Louvain, Center for Operations Research and Econometrics (CORE).
  8. Mertens, Jean-François, and Shmuel Zamir, 1985. "Formulation of Bayesian Analysis for Games with Incomplete Information," International Journal of Game Theory, 14(1): 1–29.
  9. An exposition for the general reader is Shmuel Zamir, 2008. "Bayesian Games: Games with Incomplete Information," Discussion Paper 486, Center for Rationality, Hebrew University.
  10. A popular version in the form of a sequence of dreams about dreams appears in the film "Inception." The logical aspects of players' beliefs about others' beliefs are related to players' knowledge about others' knowledge; see Prisoners and hats puzzle for an entertaining example, and Common knowledge (logic) for another example and a precise definition.
  11. Aumann, R. J., and Maschler, M., 1995. Repeated Games with Incomplete Information. Cambridge, MA: MIT Press.
  12. Sorin S (2002a) A first course on zero-sum repeated games. Springer, Berlin
  13. Mertens J-F (1987) Repeated games. In: Proceedings of the international congress of mathematicians, Berkeley 1986. American Mathematical Society, Providence, pp 1528–1577
  14. Mertens J-F (1972) The value of two-person zero-sum repeated games: the extensive case. Int J Game Theory 1:217–227
  15. Mertens J-F, Zamir S (1971) The value of two-person zero-sum repeated games with lack of information on both sides. Int J Game Theory 1:39–64
  16. Cardaliaguet P (2007) Differential games with asymmetric information. SIAM J Control Optim 46:816–838
  17. De Meyer B (1996a) Repeated games and partial differential equations. Math Oper Res 21:209–236
  18. De Meyer B. (1999), From repeated games to Brownian games, 'Annales de l'Institut Henri Poincaré, Probabilites et Statistiques', 35, 1–48.
  19. Mertens J.-F. (1998), The speed of convergence in repeated games with incomplete information on one side, 'International Journal of Game Theory', 27, 343–359.
  20. Mertens J.-F. and S. Zamir (1976b), The normal distribution and repeated games, 'International Journal of Game Theory', 5, 187–197.
  21. De Meyer B (1996b) Repeated games, duality and the Central Limit theorem. Math Oper Res 21:237–251
  22. Mertens J-F, Zamir S (1976a) On a repeated game without a recursive structure. Int J Game Theory 5:173–182
  23. Sorin S (1989) On repeated games without a recursive structure: existence of \lim v_n. Int J Game Theory 18:45–55
  24. Shapley, L. S., 1953. "Stochastic Games," Proceedings of the National Academy of Sciences, 39(10): 1095–1100.
  25. Blackwell, D., and Ferguson, T. S., 1968. "The Big Match," Annals of Mathematical Statistics, 39(1): 159–163.
  26. Mertens, Jean-François, and Abraham Neyman, 1981. "Stochastic Games," International Journal of Game Theory, 10: 53–66.
  27. Mertens, J-F., Parthasarathy, T.P. 2003. Equilibria for discounted stochastic games. In Neyman A, Sorin S, editors, Stochastic Games and Applications, Kluwer Academic Publishers, 131–172.
  28. The limit-price mechanism
  29. Mertens, Jean-François, 1980. "Values and Derivatives," Mathematics of Operations Research, 5: 523–552.
  30. Mertens, Jean-François, 1988. "The Shapley Value in the Non Differentiable Case," International Journal of Game Theory, 17: 1–65.
  31. Neyman, A., 2002. "Values of Games with Infinitely Many Players," in R. J. Aumann and S. Hart (eds.), Handbook of Game Theory with Economic Applications, Volume 3, Elsevier.
  32. Govindan, Srihari, and Robert Wilson, 2008. "Refinements of Nash Equilibrium," The New Palgrave Dictionary of Economics, 2nd Edition.
  33. Govindan, Srihari, and Robert Wilson, 2009. "On Forward Induction," Econometrica, 77(1): 1–28.
  34. Kohlberg, Elon, and Jean-François Mertens, 1986. "On the Strategic Stability of Equilibria," Econometrica, 54(5): 1003–1037.
  35. Mertens, Jean-François, 2003. "Ordinality in Non Cooperative Games," International Journal of Game Theory, 32: 387–430.
  36. Mertens, Jean-François, 1992. "The Small Worlds Axiom for Stable Equilibria," Games and Economic Behavior, 4: 553–564.
  37. Mertens, Jean-François, 1989 and 1991. "Stable Equilibria – A Reformulation," Mathematics of Operations Research, 14: 575–625 and 16: 694–753.
  38. Govindan, Srihari, and Jean-François Mertens, 2004. "An Equivalent Definition of Stable Equilibria," International Journal of Game Theory, 32(3): 339–357.
  39. Govindan, Srihari, and Robert Wilson, 2012. "Axiomatic Theory of Equilibrium Selection for Generic Two-Player Games," Econometrica, 70.
  40. Arrow, K.J., "A Difficulty in the Concept of Social Welfare", Journal of Political Economy 58(4) (August, 1950), pp. 328–346
  41. Dhillon, A., and J. F. Mertens, 1999. "Relative Utilitarianism," Econometrica, 67(3): 471–498.
  42. Mertens, Jean-François, and Anna Rubinchik, 2012. "Intergenerational Equity and the Discount Rate for Policy Analysis," Macroeconomic Dynamics, 16(1): 61–93.
  43. Lua error in package.lua at line 80: module 'strict' not found.
  44. Lua error in package.lua at line 80: module 'strict' not found.