Choose the alternative that best completes the stem of each question.
1. If you drew every possible sample of a given size from a population and calculated a mean for each sample, the distribution of those means is the
   A) sampling distribution of the mean.
   B) standard error of the mean.
   C) degrees of freedom of the mean.
   D) central tendency of the mean.

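The concept behind question 1 can be demonstrated with a short simulation (an illustrative sketch, not part of the test bank; the population parameters are arbitrary): drawing many samples and averaging each one builds up the sampling distribution of the mean, whose standard deviation is the standard error.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: normally distributed, mean 100, sd 15 (made-up values).
POP_MEAN, POP_SD, N = 100, 15, 25

# Draw many samples of size N and record each sample's mean.
sample_means = [
    statistics.mean(random.gauss(POP_MEAN, POP_SD) for _ in range(N))
    for _ in range(10_000)
]

# The collection of sample means is the sampling distribution of the mean.
# Its mean approximates the population mean, and its standard deviation
# approximates the standard error: POP_SD / sqrt(N) = 15 / 5 = 3.
print(round(statistics.mean(sample_means), 1))
print(round(statistics.stdev(sample_means), 1))
```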
2. An assumption underlying parametric statistics is that
   A) sampling was done from a normally distributed population.
   B) your data were measured on a nominal or an ordinal scale.
   C) your data need not meet any strict requirements.
   D) both A and B

3. If your independent variable has no effect on the dependent variable, the distributions representing the different groups in your experiment
   A) represent two distinct populations.
   B) are independent samples drawn from the same population.
   C) are probably positively skewed.
   D) are probably negatively skewed.

4. The hypothesis that says that your sample means were drawn from the same population is the
   A) alternative hypothesis.
   B) central limit hypothesis.
   C) null hypothesis.
   D) post hoc hypothesis.

5. If the probability that the difference between sample means could have resulted from sampling the same population is sufficiently small, then we say that the difference between means is
   A) not statistically significant.
   B) statistically significant.
   C) valid.
   D) none of the above

6. Although inferential statistics are designed to help you minimize decision-making errors, errors are still possible. If you reject the null hypothesis when in fact it is true, you are making a
   A) Type II error.
   B) Type I error.
   C) Type III error.
   D) per-comparison error.

7. If you take steps to minimize a Type I error, then the probability of making a Type II error is
   A) increased.
   B) also decreased.
   C) unaffected.
   D) cut in half.

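The trade-off in questions 6 and 7 can be seen by simulation (a sketch under assumed, made-up parameters): tightening alpha makes Type I errors rarer, but, holding sample size and effect size constant, it raises the Type II error rate.

```python
import random
import statistics
from math import sqrt

random.seed(1)

def miss_rate(critical_z, true_diff, n=20, sd=1.0, trials=2000):
    """Fraction of simulated experiments that FAIL to detect a real
    difference (the Type II error rate), using a two-tailed z test
    with known population sd."""
    misses = 0
    for _ in range(trials):
        a = [random.gauss(0.0, sd) for _ in range(n)]
        b = [random.gauss(true_diff, sd) for _ in range(n)]
        se = sd * sqrt(2 / n)
        z = abs(statistics.mean(b) - statistics.mean(a)) / se
        if z < critical_z:       # not significant -> missed the real effect
            misses += 1
    return misses / trials

# Two-tailed critical z values: 1.96 for alpha = .05, 2.576 for alpha = .01.
beta_05 = miss_rate(1.96, true_diff=0.7)
beta_01 = miss_rate(2.576, true_diff=0.7)
print(beta_05 < beta_01)  # stricter alpha -> more Type II errors
```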
8. By convention, alpha has been set at no larger than p <
   A) .10.
   B) .05.
   C) .025.
   D) .01.

9. According to the text, one-tailed tests should be used
   A) whenever you are unsure what kind of test to use.
   B) in any situation where you cannot predict the direction of an effect.
   C) only if there is some compelling a priori reason not to use a two-tailed test.
   D) when nonparametric statistics are used.

10. The most appropriate statistical test for an experiment with two independent groups and the dependent variable measured on an interval scale is
   A) the chi-square.
   B) the t test for independent samples.
   C) the one-sample z test.
   D) a two-factor ANOVA.

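The design in question 10 maps onto a standard analysis; a minimal sketch using SciPy with made-up scores (the group names and parameter values are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up interval-scale scores for two independent groups.
treatment = rng.normal(loc=60, scale=10, size=30)
control = rng.normal(loc=45, scale=10, size=30)

# t test for independent samples (two-tailed by default).
t_stat, p_value = stats.ttest_ind(treatment, control)
print(p_value < .05)  # reject the null at the conventional alpha?
```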
11. For an experimental design that goes beyond two groups and a dependent variable measured on an interval scale, the best statistic is the
   A) ANOVA.
   B) t test for correlated samples.
   C) Mann-Whitney U test.
   D) chi-square test.

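For the multi-group design in question 11, a one-way ANOVA can be sketched the same way (illustrative data only; the three group means below are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Made-up interval-scale scores for three independent groups.
low = rng.normal(20, 5, size=25)
medium = rng.normal(25, 5, size=25)
high = rng.normal(33, 5, size=25)

# One-way ANOVA: does at least one group mean differ from the others?
f_stat, p_value = stats.f_oneway(low, medium, high)
print(p_value < .05)
```

A significant F only says that some difference exists among the means; locating which pairs differ is the job of the post hoc comparisons raised in question 12.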
12. If you are contemplating doing many post hoc, unplanned comparisons, you must be concerned with
   A) per-comparison error.
   B) beta errors.
   C) familywise error.
   D) probability funneling.

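The familywise error in question 12 grows quickly with the number of comparisons: across k independent tests, the chance of at least one Type I error is 1 - (1 - alpha)^k. A short arithmetic sketch, including the common Bonferroni correction (dividing alpha by k):

```python
# Familywise Type I error rate across k independent comparisons at alpha = .05,
# and the Bonferroni-corrected per-comparison alpha.
alpha = .05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k   # P(at least one Type I error)
    bonferroni = alpha / k              # corrected per-comparison alpha
    print(k, round(familywise, 3), bonferroni)
```

With 20 comparisons, the uncorrected familywise rate is roughly .64, so a "significant" result somewhere in the family is more likely than not even when every null hypothesis is true.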
13. If you have unequal sample sizes, you would use an unweighted means analysis if
   A) your experimental procedure caused the unequal sample sizes.
   B) your experimental procedure did not cause the unequal sample sizes.
   C) the size of the sample in one group did not exceed any of the others by more than three participants.
   D) both A and B

14. Nonparametric tests
   A) are used only when your data do not meet the assumptions of parametric statistics.
   B) are used if your data do not meet the assumptions of a parametric test, even if your data were scaled on an interval or ratio scale.
   C) are used when your data are scaled on less than an interval scale.
   D) both B and C

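A nonparametric counterpart to the independent-samples t test is the Mann-Whitney U test from question 11's options; a sketch with made-up ordinal ratings (the 1-7 scale and the values are illustrative assumptions):

```python
from scipy import stats

# Made-up ordinal ratings (1-7 scale) from two independent groups --
# data below interval scale, so a nonparametric test is appropriate.
group_a = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]
group_b = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(p_value < .05)
```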
15. The power of a statistical test refers to its
   A) ability to eliminate statistical errors.
   B) ability to analyze data that violate the assumptions of the test.
   C) ability to detect differences between means.
   D) all of the above

16. The power of a statistical test is affected by
   A) sample size.
   B) the alpha level chosen.
   C) effect size.
   D) all of the above
   E) both A and B only

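The sample-size factor from question 16 can be made concrete by simulation (a sketch under assumed values; the effect size of 0.5 and the sample sizes are made up): power is the proportion of experiments that detect a true effect, and it climbs as n grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulated_power(n, effect_size=0.5, alpha=.05, trials=1000):
    """Fraction of simulated experiments that correctly reject the null
    when a true effect of the given standardized size exists."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, size=n)
        b = rng.normal(effect_size, 1.0, size=n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

# Larger samples -> more power, other things equal.
small, large = simulated_power(n=20), simulated_power(n=80)
print(small < large)
```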
17. If one finding is statistically significant at p < .01 and a second at p < .05, it would be logical to say that
   A) finding 1 is more significant than finding 2.
   B) finding 2 is more significant than finding 1.
   C) finding 1 and finding 2 are equally significant.
   D) you can have greater confidence in rejecting the null hypothesis for finding 1 than finding 2.

18. Data transformations that change the values of numbers, but not the scale of measurement, are called
   A) nonlinear transformations.
   B) geometric transformations.
   C) linear transformations.
   D) simple transformations.

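A linear transformation of the kind in question 18 has the form y = bx + a; a quick sketch with made-up scores (the Celsius-to-Fahrenheit rule is used purely as a familiar example) shows that values change while each score's relative standing does not:

```python
import statistics

# Made-up interval-scale scores.
scores = [52, 61, 47, 70, 58, 66]

# A linear transformation: multiply by a constant, add a constant
# (here, the Celsius-to-Fahrenheit rule, y = 1.8x + 32).
transformed = [1.8 * x + 32 for x in scores]

# Values change, but relative standing does not: each score's z score
# is identical before and after the transformation.
def z_scores(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [round((x - m) / s, 6) for x in xs]

print(z_scores(scores) == z_scores(transformed))
```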
19. A legitimate reason for transforming your data is
   A) to help a nonsignificant finding become significant.
   B) when your data do not meet assumptions of a parametric statistic and no nonparametric alternative is available.
   C) to reduce the effects of extraneous variables.
   D) all of the above

20. If, for some reason, you cannot use inferential statistics, you may have to
   A) establish reliability through replication.
   B) redo your experiment so that you can use inferential statistics.
   C) simply “eyeball” your results to determine reliability.
   D) ignore reliability issues and interpret your data anyway.