Brascamp–Lieb inequality

In mathematics, the Brascamp–Lieb inequality can refer to two inequalities. The first is a result in geometry concerning integrable functions on n-dimensional Euclidean space Rn. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result in probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb.

The geometric inequality

Fix natural numbers m and n. For 1 ≤ i ≤ m, let ni ∈ N and let ci > 0 so that

\sum_{i = 1}^{m} c_{i} n_{i} = n.

Choose non-negative, integrable functions

f_{i} \in L^{1} \left( \mathbb{R}^{n_{i}} ; [0, + \infty] \right)

and surjective linear maps

B_{i} : \mathbb{R}^{n} \to \mathbb{R}^{n_{i}}.

Then the following inequality holds:

\int_{\mathbb{R}^{n}} \prod_{i = 1}^{m} f_{i} \left( B_{i} x \right)^{c_{i}} \, \mathrm{d} x \leq D^{- 1/2} \prod_{i = 1}^{m} \left( \int_{\mathbb{R}^{n_{i}}} f_{i} (y) \, \mathrm{d} y \right)^{c_{i}},

where D is given by

D = \inf \left\{ \left. \frac{\det \left( \sum_{i = 1}^{m} c_{i} B_{i}^{*} A_{i} B_{i} \right)}{\prod_{i = 1}^{m} ( \det A_{i} )^{c_{i}}} \right| A_{i} \mbox{ is a positive-definite } n_{i} \times n_{i} \mbox{ matrix} \right\}.

Another way to state this is that the constant D is what one would obtain by restricting attention to the case in which each f_{i} is a centered Gaussian function, namely f_{i}(y) = \exp \{-(y,\, A_{i}\, y)\}.
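Indeed, substituting f_{i}(y) = \exp \{-(y,\, A_{i}\, y)\} and evaluating the resulting Gaussian integrals gives

\int_{\mathbb{R}^{n}} \prod_{i = 1}^{m} f_{i} \left( B_{i} x \right)^{c_{i}} \, \mathrm{d} x = \pi^{n/2} \det \left( \sum_{i = 1}^{m} c_{i} B_{i}^{*} A_{i} B_{i} \right)^{-1/2} \quad \mbox{and} \quad \int_{\mathbb{R}^{n_{i}}} f_{i} (y) \, \mathrm{d} y = \pi^{n_{i}/2} \left( \det A_{i} \right)^{-1/2}.

Since \sum_{i} c_{i} n_{i} = n, the powers of \pi cancel when the two sides are compared, and the ratio of the left-hand side to the right-hand side is exactly the -1/2 power of the quotient appearing in the definition of D; taking the infimum over the matrices A_{i} shows that D^{-1/2} is the best constant attainable with Gaussian inputs.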

This inequality, with the sharp constant D determined by Gaussian maximizers, was established in [1].

Relationships to other inequalities

The geometric Brascamp–Lieb inequality

The geometric Brascamp–Lieb inequality is a special case of the above; it was first derived in [2] and was used by Ball (1989) to provide upper bounds for volumes of central sections of cubes.

For i = 1, ..., m, let ci > 0 and let ui ∈ Sn−1 be a unit vector; suppose that the ci and ui satisfy

x = \sum_{i = 1}^{m} c_{i} (x \cdot u_{i}) u_{i}

for all x in Rn. Let fi ∈ L1(R; [0, +∞]) for each i = 1, ..., m. Then

\int_{\mathbb{R}^{n}} \prod_{i = 1}^{m} f_{i} (x \cdot u_{i})^{c_{i}} \, \mathrm{d} x \leq \prod_{i = 1}^{m} \left( \int_{\mathbb{R}} f_{i} (y) \, \mathrm{d} y \right)^{c_{i}}.

The geometric Brascamp–Lieb inequality follows from the Brascamp–Lieb inequality as stated above by taking ni = 1 and Bi(x) = x · ui. Then, for zi ∈ R,

B_{i}^{*} (z_{i}) = z_{i} u_{i}.

Since each A_{i} is now a positive scalar a_{i}, the decomposition of the identity above implies that \det \left( \sum_{i} c_{i} a_{i} u_{i} u_{i}^{*} \right) \geq \prod_{i} a_{i}^{c_{i}}, with equality when every a_{i} = 1. It follows that D = 1 in this case.
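
As an illustration (a numerical sketch in Python, not part of the original statement), the following snippet checks these facts for the tight frame of three unit vectors at mutual angles of 120° in R2 with ci = 2/3: it verifies the decomposition of the identity and spot-checks the determinant inequality behind D = 1.

import numpy as np

# Illustrative tight frame in R^2: three unit vectors at 120 degrees,
# with weights c_i = 2/3, so that sum_i c_i u_i u_i^T = identity.
angles = np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
U = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are the u_i
c = np.full(3, 2.0 / 3.0)

frame = sum(ci * np.outer(ui, ui) for ci, ui in zip(c, U))
assert np.allclose(frame, np.eye(2))  # i.e. x = sum_i c_i (x . u_i) u_i

# In this rank-one case each A_i is a positive scalar a_i, and D = 1
# amounts to det(sum_i c_i a_i u_i u_i^T) >= prod_i a_i^{c_i},
# with equality when all a_i = 1.
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.uniform(0.1, 10.0, size=3)
    M = sum(ci * ai * np.outer(ui, ui) for ci, ai, ui in zip(c, a, U))
    assert np.linalg.det(M) >= np.prod(a ** c) - 1e-9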

Hölder's inequality

As another special case, take ni = n, Bi = id, the identity map on Rn, replacing fi by fi^{1/ci}, and let ci = 1 / pi for 1 ≤ i ≤ m. Then

\sum_{i = 1}^{m} \frac{1}{p_{i}} = 1

and the log-concavity of the determinant of a positive definite matrix implies that D = 1. This yields Hölder's inequality in Rn:

\int_{\mathbb{R}^{n}} \prod_{i = 1}^{m} f_{i} (x) \, \mathrm{d} x \leq \prod_{i = 1}^{m} \| f_{i} \|_{p_{i}}.
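
As a quick sanity check (an illustrative sketch, not from the original article), the discrete analogue of this inequality can be verified numerically; the snippet below does so in one dimension with m = 2 and exponents p1 = 3, p2 = 3/2.

import numpy as np

rng = np.random.default_rng(1)

# Discretize [0, 1] and draw two random non-negative functions on the grid.
x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
f1 = rng.uniform(0.0, 1.0, size=x.size)
f2 = rng.uniform(0.0, 1.0, size=x.size)

# Conjugate exponents: 1/p1 + 1/p2 = 1.
p1, p2 = 3.0, 1.5

lhs = np.sum(f1 * f2) * dx  # integral of the product
rhs = (np.sum(f1 ** p1) * dx) ** (1.0 / p1) * (np.sum(f2 ** p2) * dx) ** (1.0 / p2)
assert lhs <= rhs + 1e-12  # Hoelder's inequality holds for the Riemann sums too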

The concentration inequality

Consider a probability density function p(x) = \exp(-\phi(x)). This density is said to be log-concave if the function \phi(x) is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode of p(x). The Brascamp–Lieb inequality gives another characterization of this concentration by bounding the variance of any statistic S(x).

Formally, let S(x) be any differentiable function. The Brascamp–Lieb inequality reads:

 \text{var}_p (S(x)) \leq E_p (\nabla^T S(x) [H \phi(x)]^{-1} \nabla S(x))

where H\phi(x) is the Hessian matrix of \phi and \nabla is the gradient operator.
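
As an illustrative numerical check (not part of the original article), the following Python sketch samples from the one-dimensional log-concave density p(x) ∝ exp(-x²/2 - x⁴/4), for which H\phi(x) = \phi''(x) = 1 + 3x², and compares var_p(S) with the Brascamp–Lieb bound for the statistic S(x) = x.

import numpy as np

rng = np.random.default_rng(2)

# Toy log-concave density p(x) = exp(-phi(x))/Z with phi(x) = x^2/2 + x^4/4.
def sample(n):
    # Rejection sampling from N(0, 1): accept with probability exp(-x^4/4),
    # the ratio of the target to the Gaussian proposal (up to normalization).
    out = []
    while len(out) < n:
        x = rng.standard_normal(n)
        keep = rng.uniform(size=n) < np.exp(-x ** 4 / 4.0)
        out.extend(x[keep])
    return np.array(out[:n])

x = sample(1_000_000)

# For S(x) = x we have grad S = 1, so the bound reads
#   var_p(x) <= E_p[ 1 / phi''(x) ] = E_p[ 1 / (1 + 3 x^2) ].
lhs = x.var()
rhs = np.mean(1.0 / (1.0 + 3.0 * x ** 2))
print(f"var_p(S) = {lhs:.4f} <= {rhs:.4f} = E_p[(H phi)^(-1)]")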

This theorem was originally derived in [3]. Extensions of the inequality can be found in [4] and [5].

Relationship with other inequalities

The Brascamp–Lieb inequality is an extension of the Poincaré inequality, which concerns only Gaussian probability distributions.
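
Indeed, for a Gaussian density with mean \mu and covariance matrix \Sigma, one has \phi(x) = \tfrac{1}{2} (x - \mu)^{T} \Sigma^{-1} (x - \mu) up to an additive constant, so H\phi(x) = \Sigma^{-1} is constant and the Brascamp–Lieb inequality reduces to the Gaussian Poincaré inequality

\text{var}_p (S(x)) \leq E_p \left( \nabla^T S(x)\, \Sigma\, \nabla S(x) \right).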

The Brascamp–Lieb inequality is also related to the Cramér–Rao bound. While Brascamp–Lieb gives an upper bound, the Cramér–Rao bound lower-bounds the variance \text{var}_p (S(x)). The expressions are almost identical:

 \text{var}_p (S(x)) \geq E_p (\nabla^T S(x) ) [ E_p( H \phi(x) )]^{-1} E_p( \nabla S(x) )
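
Continuing the illustrative example above (again a sketch, not from the original article), the two bounds can be checked side by side: for S(x) = x one has E_p[\nabla S] = 1, so the Cramér–Rao side becomes 1 / E_p[\phi''(x)].

import numpy as np

rng = np.random.default_rng(3)

# Same toy density as above: p(x) = exp(-phi(x))/Z with phi = x^2/2 + x^4/4.
x = rng.standard_normal(4_000_000)
x = x[rng.uniform(size=x.size) < np.exp(-x ** 4 / 4.0)]  # rejection step

var = x.var()
upper = np.mean(1.0 / (1.0 + 3.0 * x ** 2))  # Brascamp-Lieb upper bound
lower = 1.0 / np.mean(1.0 + 3.0 * x ** 2)    # Cramer-Rao lower bound
print(f"{lower:.4f} <= var_p(S) = {var:.4f} <= {upper:.4f}")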

References

  1. E. H. Lieb, Gaussian kernels have only Gaussian maximizers, Inventiones Mathematicae 102, 179–208 (1990).
  2. H. J. Brascamp and E. H. Lieb, Best constants in Young's inequality, its converse and its generalization to more than three functions, Advances in Mathematics 20, 151–172 (1976).
  3. H. J. Brascamp and E. H. Lieb, On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation, Journal of Functional Analysis 22, 366–389 (1976).
  4. E. A. Carlen, D. Cordero-Erausquin and E. H. Lieb, Asymmetric covariance estimates of Brascamp–Lieb type and related inequalities for log-concave measures, Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques 49, 1–12 (2013).
  5. G. Hargé, Reinforcement of an inequality due to Brascamp and Lieb, Journal of Functional Analysis 254, 267–300 (2008).