Prékopa–Leindler inequality


In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.

Statement of the inequality

Let 0 < λ < 1 and let f, g, h : Rn → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space Rn. Suppose that these functions satisfy

h \left( (1-\lambda)x + \lambda y \right) \geq f(x)^{1 - \lambda} g(y)^\lambda \qquad (1)

for all x and y in Rn. Then

\| h\|_{1} := \int_{\mathbb{R}^n} h(x) \, \mathrm{d} x \geq \left( \int_{\mathbb{R}^n} f(x) \, \mathrm{d} x \right)^{1 -\lambda} \left( \int_{\mathbb{R}^n} g(x) \, \mathrm{d} x \right)^\lambda =: \| f\|_1^{1 -\lambda} \| g\|_1^\lambda. \,
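As a concrete numerical check (an added illustration, not part of the original article), the sketch below takes two one-dimensional Gaussians f(x) = e^(−x²) and g(x) = e^(−2x²) with λ = 1/2, builds the smallest admissible h on a grid via the sup-convolution implicit in condition (1), and compares the two sides of the inequality.

```python
import numpy as np

lam = 0.5
xs = np.linspace(-10.0, 10.0, 2001)
dx = xs[1] - xs[0]

f = np.exp(-xs**2)       # log-concave
g = np.exp(-2 * xs**2)   # log-concave, narrower

# Smallest h satisfying condition (1):
# h(z) = sup over (x, y) with z = (1-lam)*x + lam*y of f(x)^(1-lam) * g(y)^lam.
h = np.zeros_like(xs)
for i in range(len(xs)):
    vals = f[i] ** (1 - lam) * g ** lam
    zs = (1 - lam) * xs[i] + lam * xs
    idx = np.clip(np.round((zs - xs[0]) / dx).astype(int), 0, len(xs) - 1)
    np.maximum.at(h, idx, vals)

lhs = h.sum() * dx                                          # ∫ h
rhs = (f.sum() * dx) ** (1 - lam) * (g.sum() * dx) ** lam   # (∫f)^(1-λ) (∫g)^λ
# Analytically h(z) = e^(-4z²/3) here, so lhs ≈ √(3π/4) ≈ 1.535
# while rhs = π^(1/2) / 2^(1/4) ≈ 1.490; the inequality holds strictly.
print(lhs, rhs)
```

Equality holds when f and g are the same Gaussian; with different variances, as here, the inequality is strict.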

Essential form of the inequality

Recall that the essential supremum of a measurable function f : Rn → R is defined by

\mathop{\mathrm{ess\,sup}}_{x \in \mathbb{R}^{n}} f(x) = \inf \left\{ t \in [- \infty, + \infty] \mid f(x) \leq t \text{ for almost all } x \in \mathbb{R}^{n} \right\}.

This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let

s(x) = \mathop{\mathrm{ess\,sup}}_{y \in \mathbb{R}^n} f \left( \frac{x - y}{1 - \lambda} \right)^{1 - \lambda} g \left( \frac{y}{\lambda} \right)^\lambda.

Then s is measurable and

\| s \|_1 \geq \| f \|_1^{1 - \lambda} \| g \|_1^\lambda.

The essential supremum form was given in [1]. Its use can change the left-hand side of the inequality: for example, a function g that takes the value 1 at exactly one point (and 0 elsewhere) will usually yield a nonzero left-hand side in the pointwise-supremum form, but always yields a zero left-hand side in the essential-supremum form.
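To make that contrast concrete, here is the one-dimensional computation behind the remark (an added illustration):

```latex
% Take g = \mathbf{1}_{\{0\}} and any f with \|f\|_1 > 0. The pointwise supremum is
\[
\sup_{y \in \mathbb{R}} f\!\left(\tfrac{x-y}{1-\lambda}\right)^{1-\lambda}
    g\!\left(\tfrac{y}{\lambda}\right)^{\lambda}
  = f\!\left(\tfrac{x}{1-\lambda}\right)^{1-\lambda}
  \quad \text{(attained at } y = 0\text{)},
\]
% which is typically a nonzero function of x. In the essential form, g = 0 almost
% everywhere, so
\[
s(x) = \operatorname*{ess\,sup}_{y \in \mathbb{R}}
    f\!\left(\tfrac{x-y}{1-\lambda}\right)^{1-\lambda}
    g\!\left(\tfrac{y}{\lambda}\right)^{\lambda} \equiv 0,
\qquad
\|s\|_1 = 0 = \|f\|_1^{1-\lambda}\,\|g\|_1^{\lambda},
\]
% consistent with the right-hand side, since \|g\|_1 = 0.
```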

Relationship to the Brunn–Minkowski inequality

It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 − λ)A + λB is also measurable, then

\mu \left( (1 - \lambda) A + \lambda B \right) \geq \mu (A)^{1 - \lambda} \mu (B)^{\lambda},

where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used[2] to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 − λ)A + λB is also measurable, then

\mu \left( (1 - \lambda) A + \lambda B \right)^{1 / n} \geq (1 - \lambda) \mu (A)^{1 / n} + \lambda \mu (B)^{1 / n}.
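The usual argument for the first of these implications, sketched here for completeness, applies the Prékopa–Leindler inequality to indicator functions:

```latex
% Apply the inequality with f = \mathbf{1}_A, g = \mathbf{1}_B, and
% h = \mathbf{1}_{(1-\lambda)A + \lambda B}. If x \in A and y \in B, then
% (1-\lambda)x + \lambda y \in (1-\lambda)A + \lambda B, so
\[
\mathbf{1}_{(1-\lambda)A + \lambda B}\!\left((1-\lambda)x + \lambda y\right)
  = 1
  \geq \mathbf{1}_A(x)^{1-\lambda}\,\mathbf{1}_B(y)^{\lambda};
\]
% if instead x \notin A or y \notin B, the right-hand side is 0 and condition (1)
% holds trivially. The conclusion of the inequality then reads
\[
\mu\!\left((1-\lambda)A + \lambda B\right)
  = \int_{\mathbb{R}^n} \mathbf{1}_{(1-\lambda)A + \lambda B}
  \geq \left(\int_{\mathbb{R}^n} \mathbf{1}_A\right)^{\!1-\lambda}
       \left(\int_{\mathbb{R}^n} \mathbf{1}_B\right)^{\!\lambda}
  = \mu(A)^{1-\lambda}\,\mu(B)^{\lambda}.
\]
```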

Applications in probability and statistics

The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and independent summation of log-concave distributed random variables. Suppose that H(x,y) is a log-concave distribution for (x,y) ∈ Rm × Rn, so that by definition we have

H \left( (1 - \lambda)(x_1,y_1) + \lambda (x_2,y_2) \right) \geq H(x_1,y_1)^{1 - \lambda} H(x_2,y_2)^{\lambda} \qquad (2)

and let M(y) denote the marginal distribution obtained by integrating over x:

M(y) = \int_{\mathbb{R}^m} H(x,y) \, dx.

Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then condition (2) is exactly condition (1) for the functions h(x) = H(x,(1 − λ)y1 + λy2), f(x) = H(x,y1) and g(x) = H(x,y2), so the Prékopa–Leindler inequality applies. Its conclusion can be written in terms of M as

M((1-\lambda) y_1 + \lambda y_2) \geq M(y_1)^{1-\lambda} M(y_2)^\lambda,

which is the definition of log-concavity for M.

To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distributions. Since the product of two log-concave functions is log-concave, the joint distribution of (X, Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X + Y is a marginal over the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution.
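A quick numerical illustration of this closure property (an added sketch, not from the original article): convolve a Laplace density with a Gaussian density on a grid, which gives the density of the sum of the two independent variables, and check that the logarithm of the result is midpoint-concave away from the truncated tails.

```python
import numpy as np

xs = np.linspace(-8.0, 8.0, 1601)
dx = xs[1] - xs[0]

# Two standard log-concave densities.
f = 0.5 * np.exp(-np.abs(xs))           # Laplace(0, 1)
g = np.exp(-xs**2) / np.sqrt(np.pi)     # centered Gaussian with variance 1/2

# Density of X + Y on the same grid (discrete convolution, centered output).
conv = np.convolve(f, g, mode="same") * dx

# Midpoint log-concavity check on the central region, where edge truncation
# of the discrete convolution is negligible.
core = np.abs(xs) <= 4.0
logc = np.log(conv[core])
assert np.all(logc[1:-1] >= 0.5 * (logc[:-2] + logc[2:]) - 1e-9)
print("convolution is log-concave on the grid")
```

The discrete convolution of two log-concave sequences is itself log-concave, so the assertion holds up to floating-point noise; the same check fails for sums of variables whose densities are not log-concave, e.g. bimodal mixtures.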

Notes

  1. Brascamp, Herm J.; Lieb, Elliott H. (1976). "On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation". J. Funct. Anal. 22 (4): 366–389.
  2. Gardner, Richard J. (2002). "The Brunn–Minkowski inequality". Bull. Amer. Math. Soc. (N.S.) 39 (3): 355–405 (electronic). doi:10.1090/S0273-0979-02-00941-2. ISSN 0273-0979.
