Professional Testing, Inc.
Providing High Quality Examination Programs

From the Item Bank

The Professional Testing Blog


The Role Error Plays in Candidates’ Scores

April 8, 2015

A candidate’s observed score can be broken down into two components: a true score and an error component. The error component can be further split into two types: random and systematic. Random errors of measurement affect candidates’ scores purely by chance; sources include the temperature of the testing room, the candidate’s anxiety level, or a misread question. Systematic errors of measurement, by contrast, are factors that consistently impact a candidate’s scores. For example, when measuring a candidate’s math skill through word problems, the candidate’s reading level could affect their scores. If the same math test were administered over and over to the same candidate under the same conditions, this error would appear every time.
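The decomposition above can be sketched with a small simulation. All of the numbers here (the true score, the size of each error component) are hypothetical, chosen only to illustrate the idea:

```python
import random

random.seed(42)

TRUE_SCORE = 75          # hypothetical candidate's true score
SYSTEMATIC_ERROR = -5    # e.g., a constant penalty from reading difficulty

def observed_score():
    # Random error is redrawn by chance on every administration;
    # the systematic error is identical every time.
    random_error = random.gauss(0, 3)
    return TRUE_SCORE + SYSTEMATIC_ERROR + random_error

scores = [observed_score() for _ in range(10)]
avg = sum(scores) / len(scores)
```

Averaging repeated administrations washes out the random error, so the average drifts toward 70 (true score plus systematic error), not toward the true score of 75: random error cancels over repetitions, systematic error does not.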

Both random and systematic errors play a role in score interpretation. Random errors reduce the consistency of scores and therefore their reliability. As measurement professionals, it is our responsibility to give candidates as much information as possible to interpret their scores correctly. Going a step further and reporting the reliability of scores helps candidates gain confidence in their accuracy: the higher the reliability, the more confident one can be that the observed score reflects the true score.
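One way to see the link between random error and reliability is to correlate two administrations of a test to the same (simulated) candidates. The error magnitudes below are made up for illustration; the pattern, not the exact values, is the point:

```python
import random

random.seed(0)

def administer(true_scores, error_sd):
    # Each administration adds fresh random error to every candidate's true score.
    return [t + random.gauss(0, error_sd) for t in true_scores]

def pearson_r(xs, ys):
    # Plain Pearson correlation, computed by hand to stay self-contained.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

true_scores = [random.gauss(70, 10) for _ in range(500)]

# Small random error -> consistent scores across forms -> high reliability.
low_noise = pearson_r(administer(true_scores, 2), administer(true_scores, 2))
# Large random error -> inconsistent scores -> low reliability.
high_noise = pearson_r(administer(true_scores, 15), administer(true_scores, 15))
```

With small random error the two administrations correlate strongly; as the random error grows, the correlation (the reliability) falls, even though the candidates' true scores never changed.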

On the other hand, systematic errors do not impact the consistency of scores (i.e., their reliability). To expand on the earlier example, reading ability can consistently depress a candidate’s score on math word problems. This presents a larger issue: we can be consistently measuring the wrong thing. We may think we are measuring a candidate’s math ability when, in reality, that measurement is being contaminated by the candidate’s reading ability. We call this a construct validity issue.
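The same simulation idea shows why systematic error is a validity problem rather than a reliability problem. Here each simulated candidate carries a fixed (hypothetical) reading penalty that repeats on every form, alongside fresh random error:

```python
import random

random.seed(1)

def pearson_r(xs, ys):
    # Plain Pearson correlation, computed by hand to stay self-contained.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

true_math = [random.gauss(70, 10) for _ in range(500)]
# Systematic error: a per-candidate reading penalty that is the
# same on every administration (illustrative numbers only).
reading_penalty = [random.gauss(-8, 6) for _ in range(500)]

def administer():
    # Random error is redrawn each time; the reading penalty is not.
    return [m + p + random.gauss(0, 3)
            for m, p in zip(true_math, reading_penalty)]

form_a, form_b = administer(), administer()
reliability = pearson_r(form_a, form_b)   # stays high despite the penalty
validity = pearson_r(form_a, true_math)   # agreement with the intended construct drops
```

Because the penalty repeats identically on both forms, the two administrations still agree with each other (high reliability), yet the scores agree less well with the math ability we meant to measure: consistent, but consistently measuring the wrong thing.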

Error clearly plays a role in candidates’ scores, and as measurement professionals it is our responsibility to design examinations that minimize it. We all know that no examination is perfect. However, when we are aware of the different types of error and their impact on observed scores, we can advise others on how to avoid common mistakes.
