Normal-inverse-gamma distribution

normal-inverse-gamma
Parameters  \mu location (real)
            \lambda > 0 (real)
            \alpha > 0 (real)
            \beta > 0 (real)
Support     x \in (-\infty, \infty), \; \sigma^2 \in (0,\infty)
PDF         \frac{\sqrt{\lambda}}{\sigma\sqrt{2\pi}} \, \frac{\beta^\alpha}{\Gamma(\alpha)} \, \left( \frac{1}{\sigma^2} \right)^{\alpha + 1} e^{-\frac{2\beta + \lambda(x - \mu)^2}{2\sigma^2}}

In probability theory and statistics, the normal-inverse-gamma distribution (or Gaussian-inverse-gamma distribution) is a four-parameter family of multivariate continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and variance.

Definition

Suppose

  x | \sigma^2, \mu, \lambda \sim \mathrm{N}(\mu, \sigma^2 / \lambda)

has a normal distribution with mean  \mu and variance  \sigma^2 / \lambda, where

\sigma^2 | \alpha, \beta \sim \Gamma^{-1}(\alpha,\beta)

has an inverse gamma distribution. Then (x,\sigma^2) has a normal-inverse-gamma distribution, denoted as

 (x,\sigma^2) \sim \text{N-}\Gamma^{-1}(\mu,\lambda,\alpha,\beta) .

(\text{NIG} is also used instead of \text{N-}\Gamma^{-1}.)

In a multivariate form of the normal-inverse-gamma distribution, \mathbf{x} | \sigma^2, \boldsymbol{\mu}, \mathbf{V}^{-1} \sim \mathrm{N}(\boldsymbol{\mu},\sigma^2 \mathbf{V}); that is, conditional on \sigma^2, \mathbf{x} is a k \times 1 random vector that follows the multivariate normal distribution with mean \boldsymbol{\mu} and covariance \sigma^2\mathbf{V}, while, as in the univariate case, \sigma^2 | \alpha, \beta \sim \Gamma^{-1}(\alpha,\beta).

Characterization

Probability density function

f(x,\sigma^2|\mu,\lambda,\alpha,\beta) =  \frac {\sqrt{\lambda}} {\sigma\sqrt{2\pi} } \, \frac{\beta^\alpha}{\Gamma(\alpha)} \, \left( \frac{1}{\sigma^2} \right)^{\alpha + 1}   \exp \left( -\frac { 2\beta + \lambda(x - \mu)^2} {2\sigma^2}  \right)
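
This density is simply the product of the conditional normal density of x given \sigma^2 and the inverse gamma density of \sigma^2 from the definition above; sketching the factorization,

f(x,\sigma^2|\mu,\lambda,\alpha,\beta) = \underbrace{ \frac{\sqrt{\lambda}}{\sigma\sqrt{2\pi}} \exp\left( -\frac{\lambda(x - \mu)^2}{2\sigma^2} \right) }_{\mathrm{N}(x \,|\, \mu,\, \sigma^2/\lambda)} \; \underbrace{ \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{1}{\sigma^2} \right)^{\alpha + 1} \exp\left( -\frac{\beta}{\sigma^2} \right) }_{\Gamma^{-1}(\sigma^2 \,|\, \alpha, \beta)} ,

and combining the two exponentials gives the expression above.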

For the multivariate form where   \mathbf{x} is a  k \times 1 random vector,

f(\mathbf{x},\sigma^2|\boldsymbol{\mu},\mathbf{V}^{-1},\alpha,\beta) =  |\mathbf{V}|^{-1/2} (2\pi)^{-k/2} \, \frac{\beta^\alpha}{\Gamma(\alpha)} \, \left( \frac{1}{\sigma^2} \right)^{k/2 + \alpha + 1}   \exp \left( -\frac{2\beta + (\mathbf{x} - \boldsymbol{\mu})' \mathbf{V}^{-1} (\mathbf{x} - \boldsymbol{\mu})}{2\sigma^2} \right),

where |\mathbf{V}| is the determinant of the  k \times k matrix \mathbf{V}. Note how this last equation reduces to the first form if k = 1 so that \mathbf{x}, \mathbf{V}, \boldsymbol{\mu} are scalars.
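
As a quick numerical sanity check of this factorization, the sketch below (assuming NumPy and SciPy, with arbitrarily chosen parameter values) compares the closed-form joint density against the product of the conditional multivariate normal density and the inverse gamma density; the two agree up to floating-point error.

  import math
  import numpy as np
  from scipy import stats

  # Arbitrary illustrative parameters with k = 2
  mu = np.array([1.0, -0.5])
  V = np.array([[2.0, 0.3],
                [0.3, 1.0]])
  alpha, beta = 3.0, 2.0
  x = np.array([0.7, 0.1])
  sigma2 = 1.5
  k = len(mu)

  # Closed-form joint density from the formula above
  quad = (x - mu) @ np.linalg.inv(V) @ (x - mu)
  closed_form = (np.linalg.det(V) ** -0.5 * (2 * np.pi) ** (-k / 2)
                 * beta ** alpha / math.gamma(alpha)
                 * (1 / sigma2) ** (k / 2 + alpha + 1)
                 * np.exp(-(2 * beta + quad) / (2 * sigma2)))

  # Product of N(x | mu, sigma^2 V) and the inverse gamma density of sigma^2
  factorised = (stats.multivariate_normal.pdf(x, mean=mu, cov=sigma2 * V)
                * stats.invgamma.pdf(sigma2, alpha, scale=beta))

  print(closed_form, factorised, np.isclose(closed_form, factorised))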

Alternative parameterization

It is also possible to let  \gamma = 1 / \lambda in which case the pdf becomes

f(x,\sigma^2|\mu,\gamma,\alpha,\beta) =  \frac {1} {\sigma\sqrt{2\pi\gamma} } \, \frac{\beta^\alpha}{\Gamma(\alpha)} \, \left( \frac{1}{\sigma^2} \right)^{\alpha + 1}   \exp \left( -\frac{2\gamma\beta + (x - \mu)^2}{2\gamma \sigma^2} \right)
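
This form follows by direct substitution of \lambda = 1/\gamma into the density above: \sqrt{\lambda} = 1/\sqrt{\gamma}, and

-\frac{2\beta + \lambda(x - \mu)^2}{2\sigma^2} = -\frac{2\beta + (x - \mu)^2/\gamma}{2\sigma^2} = -\frac{2\gamma\beta + (x - \mu)^2}{2\gamma\sigma^2} .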

In the multivariate form, the corresponding change would be to regard the covariance matrix \mathbf{V} instead of its inverse \mathbf{V}^{-1} as a parameter.

Cumulative distribution function

F(x,\sigma^2|\mu,\lambda,\alpha,\beta) = \frac{ e^{-\frac{\beta}{\sigma^2}} \left( \frac{\beta}{\sigma^2} \right)^{\alpha} \left( \operatorname{erf}\!\left( \frac{\sqrt{\lambda}\,(x-\mu)}{\sqrt{2}\,\sigma} \right) + 1 \right) }{ 2 \sigma^2 \, \Gamma(\alpha) }

Differential equation

The probability density function of the normal-inverse-gamma distribution is a solution to the following differential equation:

\left\{
\begin{array}{l}
\sigma^2 f'(x) + \lambda f(x)\,(x-\mu) = 0, \\[6pt]
f(0) = \dfrac{ \sqrt{\lambda}\, \beta^{\alpha} \left( \frac{1}{\sigma^2} \right)^{\alpha+1} e^{\frac{-2\beta - \lambda\mu^2}{2\sigma^2}} }{ \sqrt{2\pi}\, \sigma\, \Gamma(\alpha) }
\end{array}
\right\}
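
The first equation can be verified by differentiating the density with respect to x for fixed \sigma^2: only the exponential factor depends on x, so

f'(x) = -\frac{\lambda (x - \mu)}{\sigma^2}\, f(x) ,

which rearranges to the stated equation; the boundary value f(0) follows from setting x = 0 in the density.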

Properties

Marginal distributions

Given  (x,\sigma^2) \sim \text{N-}\Gamma^{-1}(\mu,\lambda,\alpha,\beta) as above, the marginal distribution of \sigma^2 is an inverse gamma distribution:

\sigma^2 \sim \Gamma^{-1}(\alpha,\beta)

while  \sqrt{\frac{\alpha\lambda}{\beta}} (x - \mu) follows a t distribution with  2 \alpha degrees of freedom.
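
This can be seen by writing \tau = 1/\sigma^2: then \tau has a gamma distribution with shape \alpha and rate \beta, so 2\beta\tau follows a chi-squared distribution with 2\alpha degrees of freedom, while \sqrt{\lambda\tau}\,(x - \mu) is standard normal and independent of \tau. Hence

\sqrt{\frac{\alpha\lambda}{\beta}} (x - \mu) = \frac{ \sqrt{\lambda\tau}\,(x - \mu) }{ \sqrt{ 2\beta\tau / (2\alpha) } } ,

a standard normal variate divided by the square root of an independent chi-squared variate over its degrees of freedom, which is a t_{2\alpha} variate by definition.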

In the multivariate case, the marginal distribution of \mathbf{x} is a multivariate t distribution:

\mathbf{x} \sim t_{2\alpha}(\boldsymbol{\mu}, \frac{\beta}{\alpha} \mathbf{V})
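
The univariate marginals can also be checked numerically; the sketch below (assuming NumPy and SciPy, with arbitrary illustrative parameter values) draws Monte Carlo samples and compares them with the stated closed forms via Kolmogorov-Smirnov tests, both of which should report small statistics and non-negligible p-values.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  mu, lam, alpha, beta = 1.0, 2.0, 3.0, 4.0   # arbitrary illustrative values
  n = 200_000

  # Draw (x, sigma^2) from the normal-inverse-gamma distribution
  sigma2 = stats.invgamma.rvs(alpha, scale=beta, size=n, random_state=rng)
  x = rng.normal(mu, np.sqrt(sigma2 / lam))

  # Marginal of sigma^2: inverse gamma(alpha, beta)
  print(stats.kstest(sigma2, stats.invgamma(alpha, scale=beta).cdf))

  # sqrt(alpha * lam / beta) * (x - mu): Student t with 2 * alpha degrees of freedom
  z = np.sqrt(alpha * lam / beta) * (x - mu)
  print(stats.kstest(z, stats.t(df=2 * alpha).cdf))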

Summation

Scaling

Exponential family

Information entropy

Kullback-Leibler divergence

Maximum likelihood estimation


Posterior distribution of the parameters

See the articles on normal-gamma distribution and conjugate prior.
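
In outline, for the parameterization used here and i.i.d. observations y_1, \dots, y_n drawn from \mathrm{N}(x, \sigma^2) with sample mean \bar{y}, the usual conjugate update gives a normal-inverse-gamma posterior (a sketch; see the linked articles for the derivation):

(x,\sigma^2) \mid y_1,\dots,y_n \sim \text{N-}\Gamma^{-1}\!\left( \frac{\lambda\mu + n\bar{y}}{\lambda + n},\; \lambda + n,\; \alpha + \frac{n}{2},\; \beta + \frac{1}{2}\sum_{i=1}^n (y_i - \bar{y})^2 + \frac{\lambda n (\bar{y} - \mu)^2}{2(\lambda + n)} \right) .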

Interpretation of the parameters

See the articles on normal-gamma distribution and conjugate prior.

Generating normal-inverse-gamma random variates

Generation of random variates is straightforward; see the code sketch after the two steps below:

  1. Sample \sigma^2 from an inverse gamma distribution with parameters \alpha and \beta
  2. Sample x from a normal distribution with mean \mu and variance \sigma^2/\lambda
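
A minimal sketch of these two steps in Python, assuming NumPy and SciPy are available (the function name and parameter values are illustrative only):

  import numpy as np
  from scipy import stats

  def sample_normal_inverse_gamma(mu, lam, alpha, beta, size=1, rng=None):
      """Draw (x, sigma^2) pairs from N-Gamma^{-1}(mu, lam, alpha, beta)."""
      rng = np.random.default_rng(rng)
      # Step 1: sigma^2 ~ inverse gamma with shape alpha and scale beta
      sigma2 = stats.invgamma.rvs(alpha, scale=beta, size=size, random_state=rng)
      # Step 2: x | sigma^2 ~ normal with mean mu and variance sigma^2 / lam
      x = rng.normal(mu, np.sqrt(sigma2 / lam))
      return x, sigma2

  # Example: five draws with illustrative parameter values
  x, sigma2 = sample_normal_inverse_gamma(mu=0.0, lam=2.0, alpha=3.0, beta=4.0, size=5, rng=0)
  print(x)
  print(sigma2)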

Related distributions

  • The normal-gamma distribution is the same distribution parameterized by the precision \tau = 1/\sigma^2 rather than the variance (see the note after this list)
  • A generalization of this distribution which allows for a multivariate mean and a completely unknown positive-definite covariance matrix \sigma^2 \mathbf{V} (whereas in the multivariate normal-inverse-gamma distribution above the covariance matrix is regarded as known up to the scale factor \sigma^2) is the normal-inverse-Wishart distribution
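
The precision reparameterization in the first item amounts to the change of variables \tau = 1/\sigma^2: if (x,\tau) follows a normal-gamma distribution with parameters (\mu,\lambda,\alpha,\beta), then \tau \sim \Gamma(\alpha,\beta) and x | \tau \sim \mathrm{N}(\mu, 1/(\lambda\tau)), which is exactly the hierarchy in the definition above with \sigma^2 = 1/\tau, so that (x, 1/\tau) \sim \text{N-}\Gamma^{-1}(\mu,\lambda,\alpha,\beta).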
