Cramér–von Mises criterion

In statistics, the Cramér–von Mises criterion is used to judge the goodness of fit of a cumulative distribution function F^* compared to a given empirical distribution function F_n, or to compare two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as

\omega^2 = \int_{-\infty}^{\infty} [F_n(x)-F^*(x)]^2\,\mathrm{d}F^*(x)

In one-sample applications, F^* is the theoretical distribution and F_n is the empirically observed distribution. Alternatively, both distributions can be empirically estimated; this is called the two-sample case.

The criterion is named after Harald Cramér and Richard Edler von Mises, who first proposed it in 1928–1930.[1][2] The generalization to two samples is due to Anderson.[3]

The Cramér–von Mises test is an alternative to the Kolmogorov–Smirnov test.

Cramér–von Mises test (one sample)

Let x_1,x_2,\cdots,x_n be the observed values, in increasing order. Then the statistic is[3]:1153[4]

T = n \omega^2 = \frac{1}{12n} + \sum_{i=1}^n \left[ \frac{2i-1}{2n}-F(x_i) \right]^2.

If this value is larger than the tabulated value, then the hypothesis that the data come from the distribution F can be rejected.
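
For illustration, the statistic can be computed directly from this formula. The following Python sketch is not from any standard library; the function names cramer_von_mises_statistic and standard_normal_cdf, and the choice of a standard normal null distribution, are assumptions made purely for the example.

    import math

    def cramer_von_mises_statistic(sample, cdf):
        """One-sample Cramer-von Mises statistic T = n*omega^2 (illustrative sketch)."""
        x = sorted(sample)                        # x_1 <= x_2 <= ... <= x_n
        n = len(x)
        t = 1.0 / (12 * n)
        for i, xi in enumerate(x, start=1):
            t += ((2 * i - 1) / (2 * n) - cdf(xi)) ** 2
        return t

    def standard_normal_cdf(x):
        # Hypothesised F; the standard normal is chosen purely as an example.
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    data = [0.2, -1.1, 0.5, 1.7, -0.3]
    T = cramer_von_mises_statistic(data, standard_normal_cdf)

The resulting value of T would then be compared with the appropriate tabulated critical value, as described above.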

Watson test

A modified version of the Cramér–von Mises test is the Watson test,[5] which uses the statistic U^2, where[4]

U^2= T-n( \bar{F}-\tfrac{1}{2} )^2,

where

\bar{F}=\frac{1}{n} \sum_{i=1}^n F(x_i).
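
As an illustration, and reusing the conventions of the one-sample sketch above, the Watson statistic can be computed as follows; watson_u2 is an assumed name for the example, not a library function.

    def watson_u2(sample, cdf):
        """Watson statistic U^2 = T - n*(Fbar - 1/2)^2 (illustrative sketch)."""
        x = sorted(sample)
        n = len(x)
        f = [cdf(xi) for xi in x]
        # T = n*omega^2, the one-sample Cramer-von Mises statistic defined above
        t = 1.0 / (12 * n) + sum(((2 * i - 1) / (2 * n) - f[i - 1]) ** 2
                                 for i in range(1, n + 1))
        f_bar = sum(f) / n                        # mean of the F(x_i)
        return t - n * (f_bar - 0.5) ** 2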

Cramér–von Mises test (two samples)

Let x_1,x_2,\cdots,x_N and y_1,y_2,\cdots,y_M be the observed values in the first and second sample respectively, in increasing order. Let r_1,r_2,\cdots,r_N be the ranks of the x's in the combined sample, and let s_1,s_2,\cdots,s_M be the ranks of the y's in the combined sample. Anderson[3]:1149 shows that

T = N \omega^2 = \frac{U}{N M (N+M)}-\frac{4 M N - 1}{6(M+N)}

where U is defined as

U = N \sum_{i=1}^N (r_i-i)^2 + M \sum_{j=1}^M (s_j-j)^2

If the value of T is larger than the tabulated values,[3]:1154–1159 the hypothesis that the two samples come from the same distribution can be rejected. (Some books[specify] give critical values for U, which is more convenient, as it avoids the need to compute T via the expression above. The conclusion will be the same).
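
For illustration, a minimal Python sketch of this rank-based computation, assuming continuous data with no ties, is given below; the function name cramer_von_mises_two_sample is an assumption for the example.

    def cramer_von_mises_two_sample(x, y):
        """Two-sample Cramer-von Mises statistic T in Anderson's rank form (sketch).

        Assumes there are no ties within or between the two samples.
        """
        x, y = sorted(x), sorted(y)
        n, m = len(x), len(y)
        combined = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
        # 1-based ranks of the x's and y's in the combined ordered sample
        r = [k + 1 for k, (v, label) in enumerate(combined) if label == 0]
        s = [k + 1 for k, (v, label) in enumerate(combined) if label == 1]
        u = (n * sum((r[i] - (i + 1)) ** 2 for i in range(n))
             + m * sum((s[j] - (j + 1)) ** 2 for j in range(m)))
        return u / (n * m * (n + m)) - (4 * m * n - 1) / (6 * (m + n))

The returned value of T (or the intermediate U) would then be compared with the tabulated critical values cited above.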

The above assumes there are no ties in the x and y sequences, and hence none in the combined ranks, so that each x_i is distinct and occupies position i in the sorted list x_1, ..., x_N. If there are ties, and x_i through x_j form a run of identical values in the sorted list, one common approach is the midrank method:[6] each tied value is assigned the "rank" (i+j)/2. In the expressions (r_i-i)^2 and (s_j-j)^2 above, ties can therefore affect all four quantities r_i, i, s_j, and j.
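
As an illustration of the midrank convention, the following hypothetical helper (midranks is an assumed name, not part of the original description) assigns each member of a run of ties the average of the 1-based positions the run occupies; the resulting values would replace the plain ranks r_i and s_j in the expression for U.

    def midranks(values):
        """Midranks of a sample: tied values share the average of the
        1-based positions they occupy in the sorted order (sketch)."""
        order = sorted(values)
        ranks = []
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and order[j + 1] == order[i]:
                j += 1                            # extend over the run of ties
            mid = ((i + 1) + (j + 1)) / 2         # average of positions i+1 .. j+1
            ranks.extend([mid] * (j - i + 1))
            i = j + 1
        return ranks

    # Example: midranks([1, 3, 3, 3, 7]) returns [1.0, 3.0, 3.0, 3.0, 5.0]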

References

  1. Cramér, H. (1928) "On the Composition of Elementary Errors", Skandinavisk Aktuarietidskrift, 11, 13–74 and 141–180.
  2. von Mises, R. E. (1928) Wahrscheinlichkeit, Statistik und Wahrheit, Julius Springer, Vienna.
  3. Anderson, T. W. (1962) "On the Distribution of the Two-Sample Cramér–von Mises Criterion", Annals of Mathematical Statistics, 33 (3), 1148–1159.
  4. Pearson, E.S., Hartley, H.O. (1972) Biometrika Tables for Statisticians, Volume 2, CUP. ISBN 0-521-06937-8 (page 118 and Table 54).
  5. Watson, G.S. (1961) "Goodness-of-Fit Tests on a Circle", Biometrika, 48 (1/2), 109–114. JSTOR 2333135.
  6. Ruymgaart, F. H. (1980) "A unified approach to the asymptotic distribution theory of certain midrank statistics". In: Statistique non Parametrique Asymptotique, 1–18, J. P. Raoult (Ed.), Lecture Notes in Mathematics, No. 821, Springer, Berlin.