Analysis of variance

From Wikiversity
Completion status: this resource is ~50% complete.
Educational level: this is a tertiary (university) resource.

ANOVA stands for Analysis of Variance. ANOVA is a family of statistical techniques for helping to infer whether there are real differences between the means of three or more groups or variables in a population, based on sample data. Before tackling this topic, you should be familiar with the normal distribution and testing differences.

The main types of ANOVA are listed below. They are all part of the general linear model.

ANOVA models and their definitions:

  t-tests: Comparison of means between two groups. With independent groups, use an independent samples t-test; with non-independent groups, a paired samples t-test; when comparing one group against a fixed value, a one-sample t-test.
  One-way ANOVA: Comparison of means of three or more independent groups.
  One-way repeated measures ANOVA: Comparison of means of three or more within-subject variables.
  Factorial ANOVA: Comparison of cell means for two or more between-subjects IVs.
  Mixed ANOVA (SPANOVA): Comparison of cell means for one or more between-subjects IVs and one or more within-subjects IVs.
  ANCOVA: Any ANOVA model with a covariate.
  MANOVA: Any ANOVA model with multiple DVs; provides an omnibus F and separate Fs.
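To make the basic logic concrete, here is a minimal pure-Python sketch of a one-way ANOVA for three independent groups (the scores are made up for illustration). It partitions the total sum of squares into between-group and within-group components and forms the F ratio from their mean squares; statistical packages report the same quantities along with a p value.

```python
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of groups of scores."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)

    # Between-groups SS: weighted squared deviations of group means from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups SS: squared deviations of scores from their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Three hypothetical independent groups of four scores each.
groups = [[4, 5, 6, 5], [7, 8, 6, 7], [10, 9, 11, 10]]
F, df1, df2 = one_way_anova(groups)
print(round(F, 2), df1, df2)  # 38.0 2 9
```

A large F means the group means vary much more than would be expected from the within-group variability alone.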

ANOVA models are parametric, relying on assumptions about the distribution of the dependent variables (DVs) for each level of the independent variable(s) (IVs).

Initially the array of assumptions for various types of ANOVA may seem bewildering. In practice, the first two assumptions here are the main ones to check. Note that the larger the sample size, the more robust ANOVA is to violation of the first two assumptions: normality and homoscedasticity (homogeneity of variance).

  1. Normality of the DV distribution: The data in each cell should be approximately normally distributed. Check via histograms and via skewness and kurtosis statistics, both overall and for each cell (i.e., for each group for each DV).
  2. Homogeneity of variance: The variance in each cell should be similar. Check via Levene's test or other homogeneity of variance tests, which are generally produced as part of the ANOVA statistical output.
  3. Sample size: More than 20 cases per cell is preferred; larger samples aid robustness to violation of the first two assumptions and increase power.
  4. Independent observations: Scores on one variable or for one group should not be dependent on another variable or group (usually guaranteed by the design of the study).
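The first two checks can be sketched in pure Python, assuming three hypothetical independent groups (made-up data). Skewness near 0 suggests approximate symmetry, and a max/min variance ratio below about 4 is a common rule of thumb for homogeneity of variance; formal checks such as Levene's test are part of most ANOVA output.

```python
def skewness(scores):
    """Sample skewness: the mean cubed z-score."""
    n = len(scores)
    m = sum(scores) / n
    sd = (sum((x - m) ** 2 for x in scores) / n) ** 0.5
    return sum(((x - m) / sd) ** 3 for x in scores) / n

def variance(scores):
    """Unbiased sample variance (n - 1 denominator)."""
    n = len(scores)
    m = sum(scores) / n
    return sum((x - m) ** 2 for x in scores) / (n - 1)

groups = [[4, 5, 6, 5], [7, 8, 6, 7], [10, 9, 11, 12]]
for g in groups:
    print(round(skewness(g), 2), round(variance(g), 2))

# Rule-of-thumb homogeneity check: largest / smallest cell variance.
variances = [variance(g) for g in groups]
ratio = max(variances) / min(variances)
print("variance ratio:", round(ratio, 2))
```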

These assumptions apply to independent sample t-tests (see also t-test assumptions), one-way ANOVAs and factorial ANOVAs.

For ANOVA models involving repeated measures, there are also the assumptions of:

  1. Sphericity: the difference scores between each within-subject variable have similar variances
  2. Homogeneity of covariance matrices of the dependent variables: tests the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups (see Box's M)
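The sphericity assumption can be illustrated with a small sketch, assuming scores for five participants under three within-subject conditions (made-up data). Sphericity holds when the variances of all pairwise difference scores are similar; formal tests (e.g., Mauchly's test) are reported by packages such as SPSS.

```python
from itertools import combinations

# Rows are participants; columns are within-subject conditions A, B, C (hypothetical).
scores = [
    [3, 5, 6],
    [4, 6, 8],
    [2, 4, 5],
    [5, 6, 9],
    [3, 4, 6],
]

def variance(xs):
    """Unbiased sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Variance of the difference scores for each pair of conditions:
# these should be roughly similar if sphericity holds.
for i, j in combinations(range(3), 2):
    diffs = [row[j] - row[i] for row in scores]
    print("var of condition", j, "-", i, "differences:", round(variance(diffs), 2))
```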

Interactions


When two or more IVs combine to have synergistic effects on the DV, an interaction is said to occur. This means that the effect of one IV on the DV is moderated by another IV.
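An interaction can be read off a table of cell means as a "difference of differences". Here is a small sketch, assuming a hypothetical 2x2 design (two IVs with two levels each; the means are made up). If the effect of IV A were the same at both levels of IV B, the difference of differences would be zero and there would be no interaction.

```python
# Hypothetical cell means for a 2x2 between-subjects design.
cell_means = {
    ("A1", "B1"): 10.0, ("A2", "B1"): 14.0,   # effect of A at B1: +4
    ("A1", "B2"): 10.0, ("A2", "B2"): 20.0,   # effect of A at B2: +10
}

effect_at_b1 = cell_means[("A2", "B1")] - cell_means[("A1", "B1")]
effect_at_b2 = cell_means[("A2", "B2")] - cell_means[("A1", "B2")]

# Non-zero difference of differences: A's effect depends on the level of B.
interaction = effect_at_b2 - effect_at_b1
print(effect_at_b1, effect_at_b2, interaction)  # 4.0 10.0 6.0
```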

Effect size


Effect sizes should be reported in addition to significance test results for ANOVA. Of note are eta-squared, partial eta-squared and Cohen's d:

  1. Partial eta-squared for each of the main effects and interaction(s) (e.g., via SS formula or SPSS - ANOVA - Options).
  2. (Total) eta-squared (e.g., via the SS formula: SS between groups / total SS); equivalent to R2 (total variance explained), i.e., the proportion of variance in the dependent variable explained by the independent variable(s).
  3. Cohen's d can be calculated for the differences between two means, i.e., pairwise contrasts. You may want to focus on particular contrasts, e.g., if there is a significant main effect for gender, compute Cohen's d for overall motivation for males versus females. You can use the spreadsheet from Tutorial 5 or calculate it yourself, using http://en.wikipedia.org/wiki/Effect_size#Cohen.27s_d
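Two of these effect sizes can be sketched in pure Python, assuming a one-way design with made-up data: eta-squared from the sums-of-squares decomposition, and Cohen's d for a pairwise contrast between two groups (pooled-SD formula).

```python
def mean(xs):
    return sum(xs) / len(xs)

def eta_squared(groups):
    """SS between groups / total SS: proportion of DV variance explained."""
    all_scores = [x for g in groups for x in g]
    gm = mean(all_scores)
    ss_total = sum((x - gm) ** 2 for x in all_scores)
    ss_between = sum(len(g) * (mean(g) - gm) ** 2 for g in groups)
    return ss_between / ss_total

def cohens_d(g1, g2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    v1 = sum((x - mean(g1)) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - mean(g2)) ** 2 for x in g2) / (n2 - 1)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (mean(g1) - mean(g2)) / pooled_sd

# Three hypothetical independent groups.
groups = [[4, 5, 6, 5], [7, 8, 6, 7], [10, 9, 11, 10]]
print(round(eta_squared(groups), 3))             # proportion of variance explained
print(round(cohens_d(groups[2], groups[0]), 2))  # contrast: group 3 vs group 1
```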

Recommended further reading: Measures of Effect Size (Strength of Association) for Analysis of Variance (Becker, 1999).

FAQ
Should I report effect sizes even when the F tests are not significant?
Effect size and statistical significance are two different, important pieces of information about an ANOVA. In a high power study, the results may be statistically significant but the size of the effect may be trivial. On the other hand, in a low power study, the results may not be statistically significant, but the size of the effects may be small, medium, or even large. Thus, both are important.

Power


Power for ANOVAs can usually be calculated as part of the analysis using statistical software (e.g., SPSS).
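Power can also be approximated by simulation. The sketch below assumes three hypothetical groups of n = 10 with true means 0, 0, and 0.8, SD 1, and a critical value of F(2, 27) of roughly 3.35 at alpha = .05 (taken from a standard F table; an assumption here, not from the text above). Estimated power is the proportion of simulated experiments whose F ratio exceeds the critical value; statistical packages compute power analytically instead.

```python
import random

def f_ratio(groups):
    """F ratio for a one-way ANOVA on a list of groups of scores."""
    all_scores = [x for g in groups for x in g]
    gm = sum(all_scores) / len(all_scores)
    ss_b = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
    ss_w = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_b / (len(groups) - 1)) / (ss_w / (len(all_scores) - len(groups)))

random.seed(1)
true_means, n, f_crit, reps = [0.0, 0.0, 0.8], 10, 3.35, 2000
rejections = 0
for _ in range(reps):
    # Draw a fresh sample for each simulated experiment.
    sample = [[random.gauss(m, 1.0) for _ in range(n)] for m in true_means]
    if f_ratio(sample) > f_crit:
        rejections += 1

power = rejections / reps
print(round(power, 2))  # estimated power under the assumed effect
```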

Data analysis exercises


See also
