Mallows's Cp

In statistics, Mallows's Cp,[1][2] named for Colin Lingwood Mallows, is used to assess the fit of a regression model that has been estimated using ordinary least squares. It is applied in the context of model selection, where a number of predictor variables are available for predicting some outcome, and the goal is to find the best model involving a subset of these predictors. A small value of Cp means that the model is relatively precise.

Mallows's Cp has been shown to be equivalent to the Akaike information criterion in the special case of Gaussian linear regression.[3]
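
A brief sketch of why, assuming Gaussian errors and a fixed plug-in estimate \hat{\sigma}^2 of the error variance shared across submodels: up to additive constants that do not depend on the submodel, the AIC of a submodel with residual sum of squares RSS and d fitted parameters reduces to

 \mathrm{AIC} = \frac{RSS}{\hat{\sigma}^2} + 2d + \mathrm{const},

while, under the definition given below,

 C_p = \frac{RSS}{\hat{\sigma}^2} - N + 2d.

The two expressions differ only by terms that are constant across submodels, so they rank candidate models identically.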

Definition and properties

Mallows's Cp addresses the issue of overfitting, in which model selection statistics such as the residual sum of squares always get smaller as more variables are added to a model. Thus, if we aim to select the model giving the smallest residual sum of squares, the model including all variables would always be selected. The Cp statistic calculated on a sample of data estimates the mean squared prediction error (MSPE) as its population target


 E\left[\sum_j \left(\hat{Y}_j - E(Y_j \mid X_j)\right)^2\right] / \sigma^2,

where \hat{Y}_j is the fitted value from the regression model for the jth case, E(Y_j \mid X_j) is the expected value for the jth case, and \sigma^2 is the error variance (assumed constant across the cases). The MSPE will not automatically get smaller as more variables are added. The optimum model under this criterion is a compromise influenced by the sample size, the effect sizes of the different predictors, and the degree of collinearity between them.
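
To make the overfitting point concrete, here is a minimal Python sketch (simulated data; all names are illustrative) showing that the residual sum of squares never increases as regressors are added, even when the extra columns are pure noise:

import numpy as np

rng = np.random.default_rng(3)
n = 100
X = rng.normal(size=(n, 6))                              # only the first two columns matter
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

for k in range(1, 7):
    A = np.column_stack([np.ones(n), X[:, :k]])          # intercept plus first k regressors
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    print(f"{k} regressors: RSS = {rss:.2f}")            # RSS is non-increasing in k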

If P regressors are selected from a set of K > P, the Cp statistic for that particular set of regressors is defined as:

 C_p = \frac{SSE_p}{S^2} - N + 2P,

where

  •  SSE_p = \sum_{i=1}^N (Y_i - \hat{Y}_{pi})^2 is the error sum of squares for the model with P regressors,
  •  \hat{Y}_{pi} is the predicted value of the ith observation of Y from the P regressors,
  •  S^2 is the residual mean square after regression on the complete set of K regressors, which serves as an estimate of \sigma^2,
  • and N is the sample size.
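
A minimal Python sketch of this definition, assuming S^2 is the residual mean square of the full K-regressor fit and P counts the intercept among the fitted parameters (simulated data; all names are illustrative):

import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 5
X = rng.normal(size=(N, K))                              # K candidate regressors
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=N)

def ols_sse(X_sub):
    # SSE of an OLS fit with an intercept; returns (SSE, number of fitted parameters)
    A = np.column_stack([np.ones(N), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return float(resid @ resid), A.shape[1]

sse_full, p_full = ols_sse(X)                            # full model, all K regressors
s2 = sse_full / (N - p_full)                             # residual mean square S^2

sse_p, P = ols_sse(X[:, :2])                             # submodel: first two regressors
cp = sse_p / s2 - N + 2 * P                              # P includes the intercept here
print(f"P = {P}, Cp = {cp:.2f}")                         # close to P when the submodel fits well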

Alternative definition[4]

Given a linear model such as:

 Y = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p + \epsilon

where

  •  \beta_0 is the intercept and \beta_1, \ldots, \beta_p are the coefficients of the predictor variables X_1, \ldots, X_p
  •  \epsilon represents the error term

Cp can also be defined as:

 C_p=\frac{1}{n}(RSS + 2d\hat{\sigma}^2)

where

  • RSS is the residual sum of squares on a training set of data
  • d is the number of predictors
  • and \hat{\sigma}^2 is an estimate of the variance of the error \epsilon associated with each response in the linear model.

Note that the model with the smallest Cp under this definition is also the model with the smallest Cp under the earlier definition: with \hat{\sigma}^2 = S^2, the two expressions differ only by an increasing affine transformation, so they are minimized by the same subset.
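
A minimal Python sketch of this version, assuming (as is common with this formula) that \hat{\sigma}^2 is estimated from the full model; the data and names are illustrative, and whether d counts the intercept varies between texts (here d is the number of predictors, as defined above):

import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 4
X = rng.normal(size=(n, K))
y = 0.5 + 1.0 * X[:, 0] + rng.normal(size=n)             # only the first predictor matters

def rss(cols):
    # residual sum of squares of an OLS fit with an intercept on the given columns
    A = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

sigma2_hat = rss(list(range(K))) / (n - K - 1)           # error-variance estimate, full model

for cols in ([0], [0, 1], [0, 1, 2], [0, 1, 2, 3]):
    d = len(cols)                                        # number of predictors in the submodel
    cp = (rss(cols) + 2 * d * sigma2_hat) / n
    print(cols, round(cp, 4))                            # the true submodel [0] should score best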

Limitations of Cp

The Cp criterion suffers from two main limitations:[5]

  1. the Cp approximation is only valid for large sample sizes;
  2. Cp cannot handle complex collections of models, as in the variable selection (or feature selection) problem.[5]

Practical use

The Cp statistic is often used as a stopping rule for various forms of stepwise regression. Mallows proposed the statistic as a criterion for selecting among many alternative subset regressions. Under a model that does not suffer from appreciable lack of fit (bias), Cp has expectation nearly equal to P; otherwise its expectation is roughly P plus a positive bias term. Nevertheless, even though its expectation is greater than or equal to P, nothing prevents Cp < P or even Cp < 0 in extreme cases. One suggestion is to order the subsets by increasing P and choose the first subset for which Cp approaches P from above.[6] In practice, the positive bias can be adjusted for by selecting a model from the ordered list of subsets such that Cp < 2P.
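
As an illustration of this stopping rule, the following Python sketch runs a best-subset search on simulated data, ordering subsets by increasing P and stopping at the first one with Cp < 2P; all names and data are illustrative:

import itertools

import numpy as np

rng = np.random.default_rng(2)
N, K = 150, 5
X = rng.normal(size=(N, K))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=N)   # only regressors 0 and 2 matter

def fit(cols):
    # (SSE, number of fitted parameters) for an OLS fit with an intercept
    A = np.column_stack([np.ones(N), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r), A.shape[1]

sse_full, p_full = fit(range(K))
s2 = sse_full / (N - p_full)                             # S^2 from the full model

for size in range(1, K + 1):
    # best subset of each size = smallest SSE among all subsets of that size
    (sse_p, P), cols = min((fit(c), c) for c in itertools.combinations(range(K), size))
    cp = sse_p / s2 - N + 2 * P
    print(f"P={P} cols={cols} Cp={cp:.2f}")
    if cp < 2 * P:                                       # heuristic stopping rule from the text
        break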

Since the sample-based Cp statistic is an estimate of the MSPE, using Cp for model selection does not completely guard against overfitting. For instance, it is possible that the selected model will be one in which the sample Cp was a particularly severe underestimate of the MSPE.

Model selection statistics such as Cp are generally not used blindly, but rather information about the field of application, the intended use of the model, and any known biases in the data are taken into account in the process of model selection.

References

  1. Mallows, C. L. (1973). "Some Comments on Cp". Technometrics. 15 (4): 661–675. doi:10.2307/1267380.
  2. Gilmour, Steven G. (1996). "The interpretation of Mallows's Cp-statistic". Journal of the Royal Statistical Society, Series D. 45 (1): 49–56.
  3. Boisbunon, Aurélie; Canu, Stéphane; Fourdrinier, Dominique; Strawderman, William; Wells, Martin T. (2013). "AIC, Cp and estimators of loss for elliptically symmetric distributions". arXiv:1308.2766.
  4. James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert (2013). An Introduction to Statistical Learning. Springer. ISBN 978-1-4614-7138-7.
  5. Giraud, C. (2015). Introduction to High-Dimensional Statistics. Chapman & Hall/CRC. ISBN 9781482237948.
  6. Daniel, C.; Wood, F. (1980). Fitting Equations to Data (Rev. ed.). New York: Wiley.
