Post hoc analysis

In the design and analysis of experiments, post hoc analysis (from Latin post hoc, "after this") consists of looking at the data, after the experiment has concluded, for patterns that were not specified a priori. Critics sometimes call it data dredging, evoking the sense that the more one looks, the more likely one is to find something. More subtly, each time a pattern in the data is considered, a statistical test is effectively performed. This greatly inflates the total number of statistical tests and necessitates the use of multiple testing procedures to compensate. However, such correction is difficult to do precisely, and in practice most results of post hoc analyses are reported with unadjusted p-values. These p-values must be interpreted in light of the fact that they are a small and selected subset of a potentially large group of p-values. Results of post hoc analyses should be explicitly labeled as such in reports and publications to avoid misleading readers.
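
For instance, running m independent tests at level \alpha when every null hypothesis is true yields at least one false positive with probability 1-(1-\alpha)^{m}. A minimal Python illustration of this arithmetic:

    # Probability of at least one false positive among m independent
    # tests, each run at significance level alpha, when all nulls are true.
    alpha = 0.05
    for m in (1, 5, 20, 100):
        fwer = 1 - (1 - alpha) ** m
        print(f"m = {m:3d}: P(at least one false positive) = {fwer:.3f}")
    # m =   1: 0.050;  m =   5: 0.226;  m =  20: 0.642;  m = 100: 0.994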

In practice, post hoc analyses are usually concerned with finding patterns or relationships between subgroups of sampled populations that would otherwise remain undetected if the scientific community relied strictly upon a priori statistical methods.[citation needed] Post hoc tests, also known as a posteriori tests, greatly expand the range of methods that can be applied in exploratory research. Post hoc examination strengthens induction by limiting the probability that significant effects will seem to have been discovered between subgroups of a population when none actually exist. As it stands, many scientific papers are published without adequate, preventative post hoc control of the type I error rate.[1]

Post hoc analysis is an important procedure without which multivariate hypothesis testing would greatly suffer, since the chances of reporting false positives would otherwise be unacceptably high. Ultimately, post hoc testing creates better informed scientists who can in turn formulate better, more efficient a priori hypotheses and research designs.

Relationship with the multiple comparisons problem

In its most literal and narrow sense, post hoc analysis simply refers to unplanned data analysis performed after the data are collected in order to reach further conclusions. In this sense, even a test that provides no type I error rate[1] protection via multiple comparisons methods is considered post hoc analysis. A good example is performing initially unplanned multiple t-tests at level \alpha, following an ANOVA test at level \alpha. Such post hoc analysis does not include multiple testing procedures, which are sometimes difficult to perform precisely. Unfortunately, analyses such as the above are still commonly conducted and their results reported with unadjusted p-values. Results of post hoc analyses which do not address the multiple comparisons problem should be explicitly labeled as such to avoid misleading readers.

In the wider and more useful sense, post hoc tests provide protection from the multiple comparisons problem, whether the inferences made are selective or simultaneous. The type of inference relates directly to the family of hypotheses of interest. Simultaneous inference means that all inferences, in the family of all hypotheses, are jointly corrected up to a specified type I error rate. Simultaneous inference may, however, be too conservative for certain large-scale problems currently being addressed by science. For such problems, a selective inference approach might be more suitable, since it assumes that a sub-group of hypotheses from the large-scale group can be viewed as a family. Selective post hoc examination strengthens induction by limiting the probability that significant differences will seem to have been discovered between sub-groups of a population when none actually exist. Accordingly, p-values of such sub-groups must be interpreted in light of the fact that they are a small and selected subset of a potentially large group of p-values.

List of post hoc tests

The following are referred to as "post hoc tests". However, on some occasions a researcher may have planned in advance to use them, in which case referring to them as "post hoc tests" is not entirely accurate. For instance, the Newman–Keuls and Tukey's methods are often referred to as post hoc, yet it is not uncommon to plan on testing all pairwise comparisons before seeing the data. In such cases, these tests are better categorized as a priori.

Fisher's least significant difference (LSD)[2]

This technique was developed by Ronald Fisher in 1935 and is used most commonly after a null hypothesis in an analysis of variance (ANOVA) test is rejected (assuming normality and homogeneity of variances). A significant ANOVA result only reveals that not all the means compared in the test are equal. Fisher's LSD is essentially a set of individual t-tests that differ only in the calculation of the standard deviation: in an ordinary t-test, the pooled standard deviation is computed from only the two groups being compared, whereas Fisher's LSD computes the pooled standard deviation from all groups, thus increasing power. Fisher's LSD does not correct for multiple comparisons.
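
As a concrete sketch, the following Python function (the name lsd_pairwise and its interface are illustrative, not a standard API) performs the pairwise t-tests using the pooled within-group variance, i.e. the ANOVA mean square error, computed from all groups:

    import numpy as np
    from scipy import stats

    def lsd_pairwise(groups, alpha=0.05):
        """Fisher's LSD: pairwise t-tests using the pooled SD from ALL groups."""
        k = len(groups)
        n_total = sum(len(g) for g in groups)
        # Pooled within-group variance = ANOVA mean square error (MSE).
        mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / (n_total - k)
        df = n_total - k
        for i in range(k):
            for j in range(i + 1, k):
                se = np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
                t = (np.mean(groups[i]) - np.mean(groups[j])) / se
                p = 2 * stats.t.sf(abs(t), df)   # two-sided p-value, uncorrected
                print(f"groups {i} vs {j}: t = {t:.3f}, p = {p:.4f}")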

The Bonferroni procedure

  • Denote by p_{i} the p-value for testing H_{i}.
  • Reject H_{i} if p_{i} \leq \frac{\alpha}{m}, where m is the number of hypotheses.

Although mainly used with planned contrasts, it can be used as a post hoc test for comparisons between data groups of interest identified after the fact. It is flexible and very simple to compute, but naive in controlling the familywise error rate simply by dividing \alpha by m. This method results in a large reduction in the power of the test: because the cut-off value is reduced, it becomes substantially more difficult for any single result to be declared statistically significant, whether or not the underlying effect is real.
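
A minimal Python sketch of this rule (the function name bonferroni_reject is illustrative, not from the original article):

    import numpy as np

    def bonferroni_reject(pvalues, alpha=0.05):
        """Reject H_i whenever p_i <= alpha / m."""
        m = len(pvalues)
        return np.asarray(pvalues) <= alpha / m

    # e.g. bonferroni_reject([0.001, 0.02, 0.04]) -> [True, False, False]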

Holm–Bonferroni method

  • Start by ordering the p-values P_{(1)} \ldots P_{(m)} and let the associated hypotheses be H_{(1)} \ldots H_{(m)}
  • Let R be the smallest k such that P_{(k)} > \frac{\alpha}{m+1-k}
  • Reject the null hypotheses H_{(1)} \ldots H_{(R-1)}. If R = 1 then none of the hypotheses are rejected.
  • This procedure is uniformly more powerful than Bonferroni's.
  • It is worth noting that this procedure controls the family-wise error rate for all m hypotheses at level α in the strong sense because it is essentially a closed testing procedure: each intersection hypothesis is tested using the simple Bonferroni test.

The Holm–Bonferroni method introduces a correction to Bonferroni's method that allows more rejections, and it is therefore less conservative and more powerful than the Bonferroni method.
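
A minimal Python sketch of the step-down rule above (the function name holm_reject is illustrative; statsmodels' multipletests with method='holm' implements the same procedure):

    import numpy as np

    def holm_reject(pvalues, alpha=0.05):
        """Holm step-down: reject H_(1), ..., H_(k-1), where k is the smallest
        (1-based) index with P_(k) > alpha / (m + 1 - k)."""
        p = np.asarray(pvalues)
        m = len(p)
        order = np.argsort(p)                      # indices of sorted p-values
        reject = np.zeros(m, dtype=bool)
        for rank, idx in enumerate(order):         # rank = 0, ..., m-1
            if p[idx] > alpha / (m - rank):        # alpha / (m + 1 - k), k = rank + 1
                break                              # stop at first non-rejection
            reject[idx] = True
        return reject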

Newman–Keuls method

A stepwise multiple comparisons procedure used to identify sample means that are significantly different from each other. It is often used as a post hoc test whenever a significant difference between three or more sample means has been revealed by an analysis of variance (ANOVA).

Duncan's new multiple range test (MRT)

Duncan developed this test as a modification of the Newman–Keuls method that would have greater power. Duncan's MRT is especially protective against false negative (Type II) errors at the expense of a greater risk of making false positive (Type I) errors.

Rodger's method

Rodger's method is a procedure for examining research data post hoc following an 'omnibus' analysis, that is, after carrying out an analysis of variance (ANOVA). Rodger's method utilizes a decision-based error rate, arguing that it is not the probability (\alpha) of rejecting H_0 in error that should be controlled, but rather the average rate of rejecting true null contrasts; that is, one should control the expected rate (\operatorname{E}\alpha) of true null contrast rejection.

Scheffé's method

Scheffé's method applies to the set of estimates of all possible contrasts among the factor level means, not just the pairwise differences. Having the advantage of flexibility, it can be used to test any number of post hoc simple and/or complex comparisons that appear interesting. The drawback of this flexibility is that the procedure is very conservative, with a low type I error rate but also low power.
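
Concretely, with k groups and N total observations, a contrast is declared significant under Scheffé's method when its |t| statistic exceeds \sqrt{(k-1)F_{\alpha;k-1,N-k}}. A minimal Python sketch of this critical value (illustrative only):

    from scipy import stats

    def scheffe_critical_t(alpha, k, N):
        """Critical |t| for any contrast under Scheffe's method:
        sqrt((k - 1) * F_{alpha; k-1, N-k})."""
        f_crit = stats.f.ppf(1 - alpha, k - 1, N - k)
        return ((k - 1) * f_crit) ** 0.5

    # e.g. with k = 4 groups and N = 40 observations at alpha = 0.05,
    # scheffe_critical_t(0.05, 4, 40) is about 2.9, versus roughly 2.0
    # for a single unadjusted t-test with 36 degrees of freedom.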

Tukey's procedure

  • Tukey's procedure is only applicable for pairwise comparisons.
  • It assumes independence of the observations being tested, as well as equal variation across observations (homoscedasticity).
  • The procedure calculates for each pair the studentized range statistic \frac{Y_{A}-Y_{B}}{SE}, where Y_{A} is the larger of the two means being compared, Y_{B} is the smaller, and SE is the standard error of the data in question.
  • Tukey's test is essentially a Student's t-test, except that it corrects for the family-wise error rate.

A correction with a similar framework is Fisher’s LSD (least significant difference).
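
If a sufficiently recent SciPy is available (scipy.stats.tukey_hsd was added in version 1.8), the full procedure can be run directly; a minimal usage sketch with made-up sample data:

    from scipy.stats import tukey_hsd

    group1 = [24.5, 23.5, 26.4, 27.1, 29.9]
    group2 = [28.4, 34.2, 29.5, 32.2, 30.1]
    group3 = [26.1, 28.3, 24.3, 26.2, 27.8]

    result = tukey_hsd(group1, group2, group3)   # all pairwise comparisons
    print(result)   # table of mean differences, p-values, confidence intervals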

Dunnett's correction

Charles Dunnett (1955, 1966; not to be confused with Dunn) described an alternative alpha error adjustment when k groups are compared to the same control group. Now known as Dunnett's test, this method is less conservative than the Bonferroni adjustment.
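
Recent SciPy versions (1.11 and later) include scipy.stats.dunnett; assuming that function is available, a minimal usage sketch with made-up data:

    from scipy.stats import dunnett

    control    = [12.1, 11.4, 12.9, 13.3, 12.0]
    treatment1 = [13.9, 14.5, 13.1, 14.0, 15.2]
    treatment2 = [12.4, 12.8, 11.9, 13.0, 12.2]

    # Compare each treatment group against the same control group.
    result = dunnett(treatment1, treatment2, control=control)
    print(result.pvalue)   # one adjusted p-value per treatment group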

Šidák's inequality

The Šidák correction rejects H_{i} if p_{i} \leq 1-(1-\alpha)^{1/m}, where m is the number of hypotheses. It controls the familywise error rate at level \alpha when the tests are independent, and it is slightly less conservative than the Bonferroni correction, since its per-test threshold is marginally larger.
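
A one-line comparison of the resulting per-test thresholds against Bonferroni's (a minimal sketch):

    alpha, m = 0.05, 10
    sidak      = 1 - (1 - alpha) ** (1 / m)    # per-test level under Sidak
    bonferroni = alpha / m                     # per-test level under Bonferroni
    print(sidak, bonferroni)   # ~0.005116 vs 0.005; Sidak is slightly less strict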

Benjamini–Hochberg (BH) procedure

The BH procedure is a step-up procedure applied to m null hypotheses H_{1}, \ldots, H_{m} with their p-values sorted in increasing order, P_{(1)} \leq \ldots \leq P_{(m)}. It finds the largest k such that P_{(k)} \leq \frac{k}{m}\alpha and rejects the null hypotheses corresponding to P_{(1)}, \ldots, P_{(k)}, thereby controlling the false discovery rate (at level \alpha) under the premise that the m hypotheses are independent or positively correlated.
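
A minimal Python sketch of the step-up rule (the function name bh_reject is illustrative; statsmodels' multipletests with method='fdr_bh' implements the same procedure):

    import numpy as np

    def bh_reject(pvalues, alpha=0.05):
        """Benjamini-Hochberg: reject the k hypotheses with the smallest
        p-values, where k is the largest index with P_(k) <= (k/m) * alpha."""
        p = np.asarray(pvalues)
        m = len(p)
        order = np.argsort(p)                              # sort p-values ascending
        below = p[order] <= alpha * (np.arange(1, m + 1) / m)
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])               # largest qualifying index
            reject[order[: k + 1]] = True
        return reject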

References

