Total variation

File:Total variation.gif
As the green ball travels on the graph of the given function, the length of the path travelled by that ball's projection on the y-axis, shown as a red ball, is the total variation of the function.

In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure. For a real-valued continuous function f, defined on an interval [a, b] ⊂ ℝ, its total variation on the interval of definition is a measure of the one-dimensional arclength of the curve with parametric equation x ↦ f(x), for x ∈ [a, b].

Historical note

The concept of total variation for functions of one real variable was first introduced by Camille Jordan in the paper (Jordan 1881).[1] He used the new concept to prove a convergence theorem for Fourier series of discontinuous periodic functions whose variation is bounded. The extension of the concept to functions of more than one variable is, however, not straightforward, for various reasons.

Definitions

Total variation for functions of one real variable

Definition 1.1. The total variation of a real-valued (or more generally complex-valued) function f, defined on an interval  [a , b] \subset \mathbb{R}, is the quantity

 V^a_b(f)=\sup_{\mathcal{P}} \sum_{i=0}^{n_P-1} | f(x_{i+1})-f(x_i) |, \,

where the supremum runs over the set of all partitions  \mathcal{P} =\left\{P=\{ x_0, \dots , x_{n_P}\}\mid P\text{ is a partition of } [a,b] \right\} of the given interval.
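The supremum in Definition 1.1 can be approached numerically by evaluating the sum over a fine partition. The following Python sketch does this for a sampled function; the test function, interval and grid size are illustrative choices, and a uniform grid only gives a lower bound, which converges to the true value for, e.g., piecewise monotone continuous functions.

    import numpy as np

    def total_variation_1d(f, a, b, n=100_000):
        """Approximate V_a^b(f) by the partition sum of Definition 1.1
        over a fine uniform partition x_0 < x_1 < ... < x_n of [a, b]."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        return np.sum(np.abs(np.diff(y)))  # sum of |f(x_{i+1}) - f(x_i)|

    # Example: sin on [0, 2*pi] rises by 1, falls by 2, then rises by 1 again,
    # so its total variation is 4.
    print(total_variation_1d(np.sin, 0.0, 2 * np.pi))  # ~ 4.0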

Total variation for functions of n > 1 real variables

Definition 1.2. Let Ω be an open subset of ℝn. Given a function f belonging to L1(Ω), the total variation of f in Ω is defined as

 V(f,\Omega):=\sup\left\{\int_\Omega f(x)\operatorname{div}\phi(x)\,\mathrm{d}x\colon \phi\in  C_c^1(\Omega,\mathbb{R}^n),\ \Vert \phi\Vert_{L^\infty(\Omega)}\le 1\right\},

where  C_c^1(\Omega,\mathbb{R}^n) is the set of continuously differentiable vector functions of compact support contained in \Omega, and  \Vert\;\Vert_{L^\infty(\Omega)} is the essential supremum norm. Note that this definition does not require the domain \Omega \subseteq \mathbb{R}^n of the given function to be a bounded set.
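The supremum in Definition 1.2 is not directly computable, but every admissible test field \varphi gives the lower bound \int_\Omega f\,\operatorname{div}\varphi \le V(f,\Omega). The Python sketch below illustrates this on a discretized domain; the bump function, the particular test field and the grid are all assumptions made for the illustration, and by Theorem 2 below the supremum for this smooth f equals \int_\Omega|\nabla f| \approx 5.57.

    import numpy as np

    # Discretized Omega = (-3, 3)^2 and a smooth bump f(x, y) = exp(-(x^2 + y^2)).
    n = 600
    xs = np.linspace(-3.0, 3.0, n)
    h = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    f = np.exp(-(X**2 + Y**2))

    # A test field phi = (x, y) * exp(-(x^2 + y^2)): |phi| <= 1 everywhere and
    # of order 1e-4 at the boundary, so it behaves like a C^1_c field on this grid.
    phi1 = X * np.exp(-(X**2 + Y**2))
    phi2 = Y * np.exp(-(X**2 + Y**2))

    # int_Omega f * div(phi) dx, with div(phi) taken by centred finite differences.
    div_phi = np.gradient(phi1, h, axis=0) + np.gradient(phi2, h, axis=1)
    lower_bound = np.sum(f * div_phi) * h**2

    print(lower_bound)  # ~ pi/2 = 1.57..., dominated by V(f, Omega) = pi**1.5 = 5.56...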

Total variation in measure theory

Classical total variation definition

Following Saks (1937, p. 10), consider a signed measure \mu on a measurable space (X,\Sigma): then it is possible to define two set functions \overline{\mathrm{W}}(\mu,\cdot) and \underline{\mathrm{W}}(\mu,\cdot), respectively called upper variation and lower variation, as follows:

\overline{\mathrm{W}}(\mu,E)=\sup\left\{\mu(A)\mid A\in\Sigma\text{ and }A\subset E \right\}\qquad\forall E\in\Sigma
\underline{\mathrm{W}}(\mu,E)=\inf\left\{\mu(A)\mid A\in\Sigma\text{ and }A\subset E \right\}\qquad\forall E\in\Sigma

clearly

\overline{\mathrm{W}}(\mu,E)\geq 0\geq \underline{\mathrm{W}}(\mu,E)\qquad\forall E\in\Sigma

Definition 1.3. The variation (also called absolute variation) of the signed measure \mu is the set function

|\mu|(E)=\overline{\mathrm{W}}(\mu,E)+\left|\underline{\mathrm{W}}(\mu,E)\right|\qquad\forall E\in\Sigma

and its total variation is defined as the value of this measure on the whole space of definition, i.e.

\|\mu\|=|\mu|(X)

Modern definition of total variation norm

Saks (1937, p. 11) uses upper and lower variations to prove the Hahn–Jordan decomposition: according to his version of this theorem, the upper and lower variations are respectively a non-negative and a non-positive measure. Using a more modern notation, define

\mu^+(\cdot)=\overline{\mathrm{W}}(\mu,\cdot)\,,
\mu^-(\cdot)=-\underline{\mathrm{W}}(\mu,\cdot)\,,

Then \mu^+ and \mu^- are two non-negative measures such that

\mu=\mu^+-\mu^-\,
|\mu|=\mu^++\mu^-\,

The last measure is sometimes called, by abuse of notation, total variation measure.
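For a measure concentrated on finitely many atoms, the upper and lower variations of a set simply collect its positive and negative atoms, which makes Definition 1.3 and the Jordan decomposition above easy to check directly. A minimal Python sketch, with an arbitrary illustrative choice of weights:

    import numpy as np

    # A hypothetical signed measure on X = {0, 1, 2, 3, 4}, given by its atom weights.
    weights = np.array([2.0, -3.0, 1.0, -0.5, 4.0])

    def upper_variation(E):
        """sup{ mu(A) : A in Sigma, A subset of E } = sum of the positive weights in E."""
        w = weights[list(E)]
        return w[w > 0].sum()

    def lower_variation(E):
        """inf{ mu(A) : A in Sigma, A subset of E } = sum of the negative weights in E."""
        w = weights[list(E)]
        return w[w < 0].sum()

    X = set(range(len(weights)))
    mu_plus = upper_variation(X)       # mu^+ = upper variation (non-negative)
    mu_minus = -lower_variation(X)     # mu^- = minus the lower variation (non-negative)

    print(mu_plus - mu_minus)          # mu(X)            = 3.5
    print(mu_plus + mu_minus)          # |mu|(X) = ||mu|| = 10.5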

Total variation norm of complex measures

If the measure \mu is complex-valued, i.e. it is a complex measure, its upper and lower variations cannot be defined and the Hahn–Jordan decomposition theorem can only be applied to its real and imaginary parts. However, it is possible to follow Rudin (1966, pp. 137–139) and define the total variation of the complex-valued measure \mu as follows:

Definition 1.4. The variation of the complex-valued measure \mu is the set function

|\mu|(E)=\sup_\pi \sum_{A\in\pi} |\mu(A)|\qquad\forall E\in\Sigma

where the supremum is taken over all partitions \pi of a measurable set E into a finite number of disjoint measurable subsets.

This definition coincides with the above definition |\mu|=\mu^++\mu^-\, for the case of real-valued signed measures.
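For a complex measure concentrated on finitely many atoms, the supremum in Definition 1.4 is attained by the partition into singletons, so |\mu|(E) is just the sum of the moduli of the atoms contained in E. A short Python sketch under that assumption, with arbitrary illustrative atom values:

    import numpy as np

    # A hypothetical complex measure on X = {0, 1, 2}, given by its atom values.
    atoms = np.array([3 + 4j, -2j, 1 + 1j])

    # |mu|(X): the finest partition (into singletons) attains the supremum.
    total_variation = np.abs(atoms).sum()       # 5 + 2 + sqrt(2) ~ 8.41

    # In general |mu(X)| is strictly smaller than |mu|(X).
    print(abs(atoms.sum()), total_variation)    # |4 + 3j| = 5.0  vs  ~8.41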

Total variation norm of vector-valued measures

The variation so defined is a positive measure (see Rudin (1966, p. 139)) and coincides with the one defined by 1.3 when \mu is a signed measure: its total variation is defined as above. This definition also works if \mu is a vector measure: in that case the variation is defined by the following formula

|\mu|(E) = \sup_\pi \sum_{A\in\pi} \|\mu(A)\|\qquad\forall E\in\Sigma

where the supremum is taken as above. Note also that this definition is slightly more general than the one given by Rudin (1966, p. 138), since it only requires considering finite partitions of the space X: this implies that it can also be used to define the total variation of finitely additive measures.

Total variation of probability measures


The total variation of any probability measure is exactly one, therefore it is not interesting as a means of investigating the properties of such measures. However, when μ and ν are probability measures, the total variation distance of probability measures can be defined as \| \mu - \nu \| where the norm is the total variation norm of signed measures. Using the property that (\mu-\nu)(X)=0, we eventually arrive at the equivalent definition

\|\mu-\nu\| = |\mu-\nu|(X)=2\sup\left\{\,\left|\mu(A)-\nu(A)\right| : A\in \Sigma\,\right\}

and its values are non-trivial. Informally, the supremum on the right is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. For a categorical distribution the total variation distance is usually written with a factor \tfrac 1 2, as

\delta(\mu,\nu) = \frac 1 2 \sum_x \left| \mu(x) - \nu(x) \right|\;.

The total variation distance for categorical probability distributions is also called the statistical distance. With this convention \delta(\mu,\nu)=\sup_{A\in\Sigma}\left|\mu(A)-\nu(A)\right|=\tfrac 1 2\|\mu-\nu\|; sometimes the factor \tfrac 1 2 is omitted, in which case the distance coincides with \|\mu-\nu\|.
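For distributions on a small finite set, both characterizations can be checked directly: the half-sum formula and the supremum of |\mu(A)-\nu(A)| over all events A. A Python sketch with two arbitrary illustrative categorical distributions:

    from itertools import chain, combinations

    import numpy as np

    # Two hypothetical categorical distributions on {0, 1, 2, 3}.
    mu = np.array([0.1, 0.4, 0.3, 0.2])
    nu = np.array([0.3, 0.1, 0.4, 0.2])

    # delta(mu, nu) = (1/2) * sum_x |mu(x) - nu(x)|
    delta = 0.5 * np.abs(mu - nu).sum()

    # sup over all events A of |mu(A) - nu(A)|, enumerating every subset of the support.
    support = range(len(mu))
    events = chain.from_iterable(combinations(support, k) for k in range(len(mu) + 1))
    sup_over_events = max(abs(mu[list(A)].sum() - nu[list(A)].sum()) for A in events)

    print(delta, sup_over_events)      # both 0.3
    print(np.abs(mu - nu).sum())       # ||mu - nu|| = 0.6 = 2 * delta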

Basic properties

Total variation of differentiable functions

The total variation of a differentiable function f can be expressed as an integral involving its derivative (its gradient, in the several-variable case), instead of as the supremum of the functionals of definitions 1.1 and 1.2.

The form of the total variation of a differentiable function of one variable

Theorem 1. The total variation of a differentiable function f, defined on an interval  [a , b] \subset \mathbb{R}, has the following expression if f' is Riemann integrable

 V^a_b(f) = \int _a^b |f'(x)|\mathrm{d}x
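Theorem 1 can be checked numerically by comparing the partition sum of Definition 1.1 with a quadrature of \int_a^b |f'(x)|\,\mathrm{d}x. A Python sketch; the test function and grid are arbitrary illustrative choices:

    import numpy as np

    a, b, n = 0.0, 3.0, 200_000
    x = np.linspace(a, b, n + 1)

    f = np.sin(x**2)                   # a C^1 test function on [0, 3]
    fprime = 2 * x * np.cos(x**2)      # its derivative, written out by hand

    # Definition 1.1: partition sum over the grid.
    partition_sum = np.sum(np.abs(np.diff(f)))

    # Theorem 1: trapezoidal quadrature of int_a^b |f'(x)| dx.
    integral = np.sum(0.5 * (np.abs(fprime[:-1]) + np.abs(fprime[1:])) * np.diff(x))

    print(partition_sum, integral)     # the two values agree to several decimal places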

The form of the total variation of a differentiable function of several variables

Theorem 2. Given a differentiable function f defined on a bounded open set \Omega \subseteq \mathbb{R}^n, the total variation of f has the following expression

V(f,\Omega) = \int\limits_\Omega\left|\nabla f(x)\right|\mathrm{d}x

Here \left|\cdot\right| denotes the Euclidean (\ell^2) norm on \mathbb{R}^n.
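Theorem 2 lends itself to the same kind of numerical check: discretize \nabla f with finite differences and sum |\nabla f| over the grid. For the Gaussian bump used in the sketch after Definition 1.2 the integral has the closed form \pi^{3/2}\approx 5.568 (computed by hand for this illustration; the grid and domain are again arbitrary choices):

    import numpy as np

    n = 600
    xs = np.linspace(-3.0, 3.0, n)
    h = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    f = np.exp(-(X**2 + Y**2))

    fx, fy = np.gradient(f, h, h)                  # centred finite differences
    tv = np.sum(np.sqrt(fx**2 + fy**2)) * h**2     # int_Omega |grad f| dx

    print(tv, np.pi**1.5)                          # both ~ 5.568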

Proof

The first step in the proof is to establish an equality which follows from the Gauss–Ostrogradsky theorem.

Lemma

Under the conditions of the theorem, the following equality holds for every \varphi\in C_c^1(\Omega,\mathbb{R}^n):

 \int\limits_\Omega f\,\mathrm{div}\varphi = -\int_\Omega\nabla f\cdot\varphi
Proof of the lemma

From the Gauss-Ostrogradsky theorem:

 \int\limits_\Omega \text{div}\mathbf R = \int\limits_{\partial\Omega}\mathbf R\cdot \mathbf n

by substituting \mathbf R:= f\mathbf\varphi, we have:

 \int\limits_\Omega\text{div}\left(f\mathbf\varphi\right) = \int\limits_{\partial\Omega}\left(f\mathbf\varphi\right)\cdot\mathbf n

and since \mathbf\varphi has compact support in \Omega, it vanishes on the boundary, so the right-hand side is zero and therefore:

 \int\limits_\Omega\text{div}\left(f\mathbf\varphi\right)=0
 \int\limits_\Omega \partial_{x_i} \left(f\mathbf\varphi_i\right)=0
 \int\limits_\Omega \mathbf\varphi_i\partial_{x_i} f + f\partial_{x_i}\mathbf\varphi_i=0
 \int\limits_\Omega f\partial_{x_i}\mathbf\varphi_i = - \int\limits_\Omega \mathbf\varphi_i\partial_{x_i} f
 \int\limits_\Omega f\text{div} \mathbf\varphi = - \int\limits_\Omega \mathbf\varphi\cdot\nabla f
Proof of the equality

Under the conditions of the theorem, from the lemma we have:

 \int\limits_\Omega f\text{div} \mathbf\varphi = - \int\limits_\Omega \mathbf\varphi\cdot\nabla f \leq \left| \int\limits_\Omega \mathbf\varphi\cdot\nabla f \right|\leq \int\limits_\Omega \left|\mathbf\varphi\right|\cdot\left|\nabla f\right|\leq \int\limits_\Omega \left|\nabla f\right|

in the last inequality, \left|\mathbf\varphi\right| can be bounded by one because, by definition, its essential supremum norm is at most one. Thus \int_\Omega f\,\text{div}\mathbf\varphi \leq \int_\Omega \left|\nabla f\right| for every admissible \mathbf\varphi, and therefore V(f,\Omega)\leq\int_\Omega\left|\nabla f\right|.

On the other hand, consider \theta_N:=-\mathbb I_{\left[-N,N\right]^n}\frac{\nabla f}{\left|\nabla f\right|} (set equal to zero wherever \nabla f vanishes) and let \theta^*_N be an approximation of \theta_N in  C^1_c , within \varepsilon and with the same integral. We can do this since  C^1_c is dense in  L^1 . Now again substituting into the lemma:

\lim\limits_{N\rightarrow\infty}\int\limits_\Omega f\,\text{div}\theta^*_N =
\lim\limits_{N\rightarrow\infty}\int\limits_{\Omega\cap\left[-N,N\right]^n}\nabla f\cdot\frac{\nabla f}{\left|\nabla f\right|} =
\lim\limits_{N\rightarrow\infty}\int\limits_{\Omega\cap\left[-N,N\right]^n}\left|\nabla f\right| = \int\limits_\Omega\left|\nabla f\right|

This gives a sequence of admissible test functions for which \int\limits_\Omega f\,\text{div}\theta^*_N tends to \int\limits_\Omega\left|\nabla f\right|, while we already know that \int\limits_\Omega f\,\text{div}\mathbf\varphi \leq \int\limits_\Omega\left|\nabla f\right| for every admissible \mathbf\varphi. Hence V(f,\Omega)=\int\limits_\Omega\left|\nabla f\right|. q.e.d.

It can be seen from the proof that the supremum is approached as

\varphi\to \frac{-\nabla f}{\left|\nabla f\right|}.

The function f is said to be of bounded variation precisely if its total variation is finite.

Total variation of a measure

The total variation is a norm defined on the space of measures of bounded variation. The space of measures on a σ-algebra of sets is a Banach space, called the ca space, relative to this norm. It is contained in the larger Banach space, called the ba space, consisting of finitely additive (as opposed to countably additive) measures, also with the same norm. The distance function associated to the norm gives rise to the total variation distance between two measures μ and ν.

For finite measures on ℝ, the link between the total variation of a measure μ and the total variation of a function, as described above, goes as follows. Given μ, define a function \varphi\colon \mathbb{R}\to \mathbb{R} by

\varphi(t) = \mu((-\infty,t])~.

Then, the total variation of the signed measure μ is equal to the total variation, in the above sense, of the function φ. In general, the total variation of a signed measure can be defined using Jordan's decomposition theorem by

\|\mu\|_{TV} = \mu_+(X) + \mu_-(X)~,

for any signed measure μ on a measurable space (X,\Sigma).
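For a finite atomic measure on ℝ this correspondence is easy to verify numerically: \varphi is a step function whose jumps are the atom weights, so its total variation in the sense of Definition 1.1 is the sum of the absolute weights, which is exactly \|\mu\|_{TV}. A Python sketch with hypothetical atoms and weights:

    import numpy as np

    # A hypothetical finite signed measure on R: atoms at `points` with signed `weights`.
    points  = np.array([-1.0, 0.5, 2.0])
    weights = np.array([ 2.0, -3.0, 1.0])

    def phi(t):
        """phi(t) = mu((-inf, t]), the distribution function of mu."""
        return np.array([weights[points <= ti].sum() for ti in np.atleast_1d(t)])

    # Total variation of phi as a function of one variable (partition sum on a fine grid).
    grid = np.linspace(-5.0, 5.0, 100_001)
    tv_phi = np.sum(np.abs(np.diff(phi(grid))))

    # Total variation norm of mu: mu_+(R) + mu_-(R) = sum of |weights|.
    tv_mu = np.abs(weights).sum()

    print(tv_phi, tv_mu)               # both 6.0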

Applications

Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). As a functional, total variation finds applications in several branches of mathematics and engineering, like optimal control, numerical analysis, and calculus of variations, where the solution to a certain problem has to minimize its value. As an example, the total variation functional is commonly used in the numerical analysis of differential equations and in image denoising problems.
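A typical instance is total-variation regularization in denoising: one minimizes a data-fidelity term plus \lambda times the total variation of the reconstruction, which removes noise while preserving jumps. The Python sketch below is a minimal illustration under assumed choices (a 1D piecewise-constant signal, a smoothed total-variation term, plain gradient descent and hand-picked parameters), not a statement of any standard algorithm:

    import numpy as np

    rng = np.random.default_rng(0)

    # A hypothetical noisy 1D signal: piecewise-constant truth plus Gaussian noise.
    truth = np.concatenate([np.zeros(100), np.ones(100), 0.3 * np.ones(100)])
    noisy = truth + 0.1 * rng.standard_normal(truth.size)

    lam, eps, step = 0.5, 1e-3, 0.02   # regularization weight, smoothing, step size

    def energy_grad(u):
        """Gradient of 0.5*||u - noisy||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps),
        a smoothed total-variation regularized denoising energy."""
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)    # derivative of the smoothed absolute value
        tv_grad = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
        return (u - noisy) + lam * tv_grad

    u = noisy.copy()
    for _ in range(5000):              # plain gradient descent, for illustration only
        u -= step * energy_grad(u)

    # The minimizer has a much smaller total variation than the noisy input.
    print(np.abs(np.diff(noisy)).sum(), np.abs(np.diff(u)).sum())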

Notes

1. Jordan (1881); see the historical references below.

Historical references

  • Jordan, Camille (1881). "Sur la série de Fourier". Comptes rendus hebdomadaires des séances de l'Académie des sciences. 92: 228–230 (available at Gallica). This is, according to Boris Golubov, the first paper on functions of bounded variation.
  • Vitali, Giuseppe (1908). "Sui gruppi di punti e sulle funzioni di variabili reali". The paper containing the first proof of the Vitali covering theorem.

References

  • Rudin, Walter (1966). Real and Complex Analysis. New York: McGraw-Hill.
  • Saks, Stanisław (1937). Theory of the Integral. Monografie Matematyczne. 7 (2nd ed.). Warszawa–Lwów: G. E. Stechert & Co. English translation from the original French by Laurence Chisholm Young, with two additional notes by Stefan Banach (available at the Polish Virtual Library of Science).
