Main Points

Validity

  • It is important for researchers to use valid instruments, because the conclusions they draw are based on the information they obtain using these instruments.
  • The term "validity," as used in research, refers to the appropriateness, meaningfulness, correctness, and usefulness of any inferences a researcher draws based on data obtained through the use of an instrument.
  • Content-related evidence of validity refers to judgments on the content and logical structure of an instrument as it is to be used in a particular study.
  • Criterion-related evidence of validity refers to the degree to which information provided by an instrument agrees with information obtained on other, independent instruments.
  • A criterion is a standard for judging; with reference to validity, it is a second instrument against which scores on an instrument can be checked.
  • Construct-related evidence of validity refers to the degree to which the totality of evidence obtained is consistent with theoretical expectations.
  • A validity coefficient is a numerical index representing the degree of correspondence between scores on an instrument and a criterion measure.
  • An expectancy table is a two-way chart used to evaluate criterion-related evidence of validity; both the validity coefficient and the expectancy table are illustrated in the sketch following this list.
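
The validity coefficient and the expectancy table lend themselves to a brief numerical illustration. The sketch below is written in Python (assuming numpy and pandas are available) and uses entirely hypothetical scores: the validity coefficient is computed as the Pearson correlation between instrument scores and an independent criterion measure, and the expectancy table is built by cross-tabulating score categories on the two measures, expressed as row percentages.

    import numpy as np
    import pandas as pd

    # Hypothetical scores: an aptitude test (the instrument) and later
    # course grades (the criterion) for the same ten individuals.
    instrument = np.array([62, 70, 55, 80, 90, 48, 75, 66, 84, 58])
    criterion  = np.array([65, 72, 50, 78, 88, 52, 80, 60, 90, 55])

    # Validity coefficient: Pearson correlation between instrument and criterion scores.
    validity_coefficient = np.corrcoef(instrument, criterion)[0, 1]
    print(f"Validity coefficient: {validity_coefficient:.2f}")

    # Expectancy table: cross-tabulate score categories on the two measures,
    # showing what percentage of each instrument category falls into each
    # criterion category (the cut points here are arbitrary).
    inst_cat = pd.cut(instrument, bins=[0, 60, 75, 100], labels=["Low", "Middle", "High"])
    crit_cat = pd.cut(criterion,  bins=[0, 60, 75, 100], labels=["Low", "Middle", "High"])
    expectancy = pd.crosstab(inst_cat, crit_cat, normalize="index") * 100
    print(expectancy.round(1))

The numbers above are made up for illustration; in practice the criterion scores would come from a second, independent instrument, as described in the points above.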

Reliability

  • The term "reliability," as used in research, refers to the consistency of scores or answers provided by an instrument.
  • Errors of measurement refer to variations in scores obtained by the same individuals on the same instrument.
  • The test-retest method of estimating reliability involves administering the same instrument twice to the same group of individuals after a certain time interval has elapsed.
  • The equivalent-forms method of estimating reliability involves administering two different, but equivalent, forms of an instrument to the same group of individuals at the same time.
  • The internal-consistency method of estimating reliability involves comparing responses to different sets of items that are part of an instrument; together with the test-retest approach, it is illustrated in the first sketch following this list.
  • Scoring agreement requires a demonstration that independent scorers can achieve satisfactory agreement in their scoring, as illustrated in the second sketch following this list.
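
The first of the reliability estimates above can be made concrete with a short numerical sketch. The Python code below (assuming numpy, with entirely hypothetical data) estimates test-retest reliability as the correlation between scores from two administrations of the same instrument, and internal consistency using Cronbach's alpha, one common index based on comparing item scores within a single administration.

    import numpy as np

    # --- Test-retest reliability (hypothetical data) ---
    # Scores for the same five individuals on the same instrument,
    # administered twice after a time interval.
    first_administration  = np.array([20, 25, 32, 18, 27])
    second_administration = np.array([22, 24, 30, 19, 29])

    test_retest_r = np.corrcoef(first_administration, second_administration)[0, 1]
    print(f"Test-retest reliability estimate: {test_retest_r:.2f}")

    # --- Internal consistency: Cronbach's alpha (hypothetical data) ---
    # Rows are individuals, columns are items from a single administration.
    items = np.array([
        [3, 4, 3, 5],
        [2, 2, 3, 2],
        [4, 5, 4, 4],
        [1, 2, 1, 2],
        [5, 4, 5, 5],
    ], dtype=float)

    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha: {alpha:.2f}")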
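
Scoring agreement can be summarized just as simply. The sketch below (again Python with numpy and hypothetical ratings) reports the percentage of responses on which two independent scorers assign the same category, along with Cohen's kappa, a common chance-corrected agreement index; the rating scale and data are invented for illustration.

    import numpy as np

    # Hypothetical categorical ratings assigned by two independent scorers
    # to the same ten responses (0 = unsatisfactory, 1 = satisfactory, 2 = excellent).
    scorer_a = np.array([2, 1, 0, 2, 1, 1, 0, 2, 2, 1])
    scorer_b = np.array([2, 1, 0, 1, 1, 1, 0, 2, 2, 0])

    # Simple percent agreement: proportion of responses given the same rating.
    percent_agreement = np.mean(scorer_a == scorer_b)
    print(f"Percent agreement: {percent_agreement:.0%}")

    # Cohen's kappa: agreement corrected for the agreement expected by chance.
    categories = np.union1d(scorer_a, scorer_b)
    chance_agreement = sum(
        np.mean(scorer_a == c) * np.mean(scorer_b == c) for c in categories
    )
    kappa = (percent_agreement - chance_agreement) / (1 - chance_agreement)
    print(f"Cohen's kappa: {kappa:.2f}")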






