Research Methods in Psychology: Learning Objectives
Learning Objectives



After studying Chapter 13, you should know and understand the following key points:

Null Hypothesis Significance Testing (NHST)

Null hypothesis testing is used to determine whether mean differences among groups in an experiment are greater than the differences that are expected simply because of error variation.

The first step in null hypothesis testing is to assume that the groups do not differ — that is, that the independent variable did not have an effect (the null hypothesis).

Probability theory is used to estimate the likelihood of the experiment's observed outcome, assuming the null hypothesis is true.

A statistically significant outcome is one that has a small likelihood of occurring if the null hypothesis is true.

Because decisions about the outcome of an experiment are based on probabilities, Type I (rejecting a true null hypothesis) or Type II (failing to reject a false null hypothesis) errors may occur.
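The logic above can be illustrated by simulation (this sketch is not from the chapter): when the null hypothesis is true, a two-sided test at α = .05 will still reject it in roughly 5% of experiments — those rejections are Type I errors. The z test and all parameter values below are simplifying assumptions chosen for the demonstration, using only the Python standard library.

```python
import math
import random

def z_test_p(sample_a, sample_b, sigma=1.0):
    """Two-sided p value for a two-sample z test with known sigma
    (a simplifying stand-in for the t test discussed in the chapter)."""
    n = len(sample_a)
    z = (sum(sample_a) / n - sum(sample_b) / n) / (sigma * math.sqrt(2.0 / n))
    upper_tail = 1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * upper_tail

random.seed(1)                       # assumption: arbitrary seed for reproducibility
n, sims, alpha = 50, 2000, 0.05      # assumption: illustrative parameter choices
false_positives = 0
for _ in range(sims):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]  # null is true: same population
    if z_test_p(a, b) < alpha:
        false_positives += 1         # rejecting a true null hypothesis: a Type I error
type1_rate = false_positives / sims  # close to alpha, by construction of the test
```

Because the null hypothesis is true in every simulated experiment, the observed rejection rate converges on the significance level itself — which is exactly what "Type I error rate = α" means.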

Experimental Sensitivity and Statistical Power

Sensitivity refers to the likelihood that an experiment will detect the effect of an independent variable when, in fact, the independent variable truly has an effect.

Power refers to the likelihood that a statistical test will allow researchers to correctly reject the null hypothesis of no group differences.

The power of statistical tests is influenced by the level of statistical significance, the size of the treatment effect, and the sample size.

The primary means for researchers to increase statistical power is increasing sample size.

Repeated measures designs are likely to be more sensitive and to have more statistical power than independent groups designs because estimates of error variation are likely to be smaller in repeated measures designs.

Type II errors are more common in psychological research using NHST than are Type I errors.

When results are not statistically significant (i.e., p > .05), it is incorrect to conclude that the null hypothesis is true.
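To illustrate how significance level, effect size, and sample size jointly determine power, here is a minimal sketch using the normal approximation to a two-sided, two-group test of means (the function name is illustrative, and using a z rather than a t critical value is a simplifying assumption):

```python
import math
from statistics import NormalDist

def power_two_group(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-group test of means.
    d is the standardized effect size (Cohen's d); n is the per-group
    sample size. Normal approximation — a simplifying assumption."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n / 2.0)  # shift of the test statistic when the effect is real
    # probability the statistic lands beyond either critical value
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)
```

For example, a medium effect (d = .5) with 64 participants per group yields power of about .80, the conventional target; doubling the sample size or the effect size pushes power well above .90, consistent with the points listed above.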

NHST: Comparing Two Means

The appropriate inferential test when comparing two means obtained from different groups of subjects is a t test for independent groups.

A measure of effect size should always be reported when NHST is used.

The appropriate inferential test when comparing two means obtained from the same subjects (or matched groups) is a repeated measures (within-subjects) t test.
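A hypothetical sketch of the independent-groups t statistic and of Cohen's d (the pooled-SD effect size that should accompany it) follows; the function names are illustrative, and a complete analysis would also convert t to a p value using the t distribution with n1 + n2 − 2 degrees of freedom:

```python
import math
from statistics import mean, stdev

def pooled_variance(a, b):
    """Pooled variance of two independent samples (equal variances assumed)."""
    na, nb = len(a), len(b)
    return ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)

def t_independent(a, b):
    """Student's t statistic for two independent groups."""
    na, nb = len(a), len(b)
    se = math.sqrt(pooled_variance(a, b) * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se

def cohens_d(a, b):
    """Cohen's d: mean difference in pooled standard deviation units."""
    return (mean(a) - mean(b)) / math.sqrt(pooled_variance(a, b))
```

Reporting d alongside t separates the size of the difference from the sample size, which is why an effect size measure should always accompany NHST results.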

Statistical Significance and Scientific or Practical Significance

We must recognize that statistical significance is not the same as scientific significance.

We also must acknowledge that statistical significance is not the same as practical or clinical significance.

ANOVA for Single-factor Independent Groups Design

Analysis of variance (ANOVA) is an inferential statistics test used to determine whether an independent variable has had a statistically significant effect on a dependent variable.

The logic of analysis of variance is based on identifying sources of error variation and systematic variation in the data.

The F-test is a statistic that represents the ratio of between-group variation to within-group variation in the data.

The results of the initial overall analysis of an omnibus F-test are presented in an analysis of variance summary table; analytical comparisons can then be used to identify specific sources of systematic variation in an experiment.

Although analysis of variance can be used to decide whether an independent variable has had a statistically significant effect, researchers examine the descriptive statistics to interpret the meaning of the experiment's outcome.

Effect size measures for independent groups designs include eta squared and Cohen's f.
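The F ratio and eta squared just described can be sketched as follows (an illustrative implementation, not taken from the text; unequal group sizes are handled by weighting each group mean by its n):

```python
from statistics import mean

def one_way_anova(groups):
    """One-way ANOVA for a list of group score lists.
    Returns (F, eta_squared): F is the ratio of between-group to
    within-group variation; eta squared is SS_between / SS_total."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)
    return f, eta_sq
```

Note that F depends on degrees of freedom (and thus on sample size) while eta squared describes only the proportion of total variation attributable to the independent variable — the same distinction drawn above between statistical significance and effect size.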

A power analysis for independent groups designs should be conducted before the study is implemented in order to determine the probability of finding a statistically significant effect; power should be reported whenever nonsignificant results based on NHST are found.

Analytical comparisons may be carried out to identify specific sources of systematic variation contributing to a statistically significant omnibus F-test.

Repeated Measures Analysis of Variance

The general procedures and logic for null hypothesis testing using repeated measures analysis of variance are similar to those used for independent groups analysis of variance.

Before beginning the analysis of variance for a complete repeated measures design, a summary score (e.g., mean, median) for each participant must be computed for each condition.

Descriptive data are calculated to summarize performance for each condition of the independent variable across all participants.

The primary way that analysis of variance differs for repeated measures is in the estimation of error variation, or residual variation; residual variation is the variation that remains when systematic variation due to the independent variable and participants is removed from the estimate of total variation.
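A minimal sketch of this partitioning for a subjects × conditions table of summary scores (illustrative only; it assumes one summary score per participant per condition and returns the F for the condition effect):

```python
from statistics import mean

def rm_anova_f(scores):
    """Repeated measures ANOVA F for a subjects x conditions table.
    Residual variation = total - condition - participant variation,
    as described above; F = MS_condition / MS_residual."""
    n_subj = len(scores)
    n_cond = len(scores[0])
    grand = mean(x for row in scores for x in row)
    cond_means = [mean(row[j] for row in scores) for j in range(n_cond)]
    subj_means = [mean(row) for row in scores]
    ss_cond = n_subj * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = n_cond * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_resid = ss_total - ss_cond - ss_subj
    df_cond = n_cond - 1
    df_resid = (n_subj - 1) * (n_cond - 1)
    return (ss_cond / df_cond) / (ss_resid / df_resid)
```

Because variation due to participants is removed before the error term is formed, the residual is typically smaller than the within-group error of an independent groups design — the source of the extra power noted earlier.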

Two-Factor Analysis of Variance for Independent Groups Designs

Analysis of a Complex Design with an Interaction

If the omnibus analysis of variance reveals a statistically significant interaction, the source of the interaction is identified using simple main effects analyses and tests of simple comparisons.

A simple main effect is the effect of one independent variable at one level of a second independent variable.

If an independent variable has three or more levels, simple comparisons can be used to examine the source of a simple main effect by comparing means two at a time.

Confidence intervals may be drawn around group means to provide information regarding the precision of estimation of population means.
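A hedged sketch of such an interval around a single group mean (it uses a normal critical value for simplicity — an assumption; with small samples a t critical value would give a slightly wider, more accurate interval):

```python
import math
from statistics import NormalDist, mean, stdev

def ci_mean(sample, confidence=0.95):
    """Approximate confidence interval for a group mean:
    mean +/- z * (SD / sqrt(n)), with z from the normal distribution."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    m = mean(sample)
    half_width = z * stdev(sample) / math.sqrt(len(sample))
    return m - half_width, m + half_width
```

A narrower interval reflects a more precise estimate of the population mean, which is why confidence intervals are a useful complement to the significance tests above.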

Analysis with No Interaction

If an omnibus analysis of variance indicates the interaction between independent variables is not statistically significant, the next step is to determine whether the main effects of the variables are statistically significant.

The source of a statistically significant main effect can be specified more precisely by performing analytical comparisons that compare means two at a time and by constructing confidence intervals.






