
Glossary Term

Reliability

Reliability refers to the ability of a test or assessment to produce consistently accurate results, time after time, regardless of who is performing the assessment. If an assessment cannot produce consistent results, then a professional cannot determine the accuracy of a measurement, compare that measurement to normative data, or reassess and compare measurements taken on two separate dates. Although the psychometrics and statistics used to determine whether a test is truly reliable are somewhat complex, the concept behind reliability is fairly simple and falls into two broad categories.

  • Intra-tester reliability - assesses the agreement (or lack thereof) between test scores from one test administration to the next, when the test is administered by a single tester.
  • Inter-tester reliability - assesses the agreement (or lack thereof) between the scores of two or more testers assessing the same individual.
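
The statistics used to quantify these two categories depend on the type of score being compared. As a minimal sketch only (not part of the original glossary entry), the example below assumes hypothetical data: two testers scoring the same clients on a categorical movement screen (inter-tester agreement summarized with Cohen's kappa) and one tester repeating a range-of-motion measurement on two separate dates (intra-tester consistency summarized with a Pearson correlation). It assumes the scikit-learn and SciPy libraries are available; all scores are invented for illustration.

    # Illustrative sketch only: hypothetical scores, not real assessment data.
    from sklearn.metrics import cohen_kappa_score
    from scipy.stats import pearsonr

    # Inter-tester reliability: two testers independently score the same
    # eight clients on a categorical screen (0 = poor, 1 = fair, 2 = good).
    tester_a = [2, 1, 0, 2, 1, 1, 2, 0]
    tester_b = [2, 1, 1, 2, 1, 0, 2, 0]
    kappa = cohen_kappa_score(tester_a, tester_b)
    print(f"Inter-tester agreement (Cohen's kappa): {kappa:.2f}")

    # Intra-tester reliability: one tester measures shoulder flexion (degrees)
    # on the same eight clients on two separate dates.
    day_1 = [168, 172, 155, 160, 178, 150, 165, 170]
    day_2 = [166, 171, 158, 162, 176, 149, 167, 168]
    r, _ = pearsonr(day_1, day_2)
    print(f"Intra-tester test-retest correlation (Pearson r): {r:.2f}")

In practice, continuous measurements such as goniometry are more often evaluated with an intraclass correlation coefficient (ICC) rather than a simple correlation, but the underlying question is the same: do repeated scores agree closely enough to be trusted?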
