{"id":27,"date":"2015-04-08T18:00:00","date_gmt":"2015-04-08T18:00:00","guid":{"rendered":"http:\/\/proftesting.com\/blog\/?p=27"},"modified":"2016-01-29T14:24:11","modified_gmt":"2016-01-29T14:24:11","slug":"201547the-role-error-plays-in-candidates-scores","status":"publish","type":"post","link":"https:\/\/www.proftesting.com\/blog\/2015\/04\/08\/201547the-role-error-plays-in-candidates-scores\/","title":{"rendered":"The Role Error Plays in Candidates&#8217; Scores"},"content":{"rendered":"<p>A candidate\u2019s observed score can be broken down into two components: a true score and an error component. The error component of observed scores can be further split into two types, random and systematic. Random errors of measurement affect candidates\u2019 scores purely by chance; examples include the temperature of the testing room, the candidate\u2019s anxiety level, or a misread question. Systematic errors of measurement, by contrast, are factors that consistently impact a candidate\u2019s scores. For example, when measuring a candidate\u2019s math skill through word problems, the candidate\u2019s reading level could affect their scores. If the same math test were administered repeatedly to the same candidate under the same conditions, this error would persist.<\/p>\n<p>Both random and systematic errors play a role in score interpretation. Random errors impact the consistency of scores and therefore their reliability. As measurement professionals, it is our responsibility to give candidates as much information as possible to interpret their scores correctly. Reporting the reliability of scores to candidates helps them gain confidence in the accuracy of those scores. More specifically, the higher the reliability of a candidate\u2019s scores, the more confident one can be that the observed score accurately reflects the true score.<\/p>\n<p>Systematic errors, on the other hand, do not impact the consistency of scores (i.e., their reliability). To expand on the earlier example, reading ability can consistently affect a candidate\u2019s score on math word problems. This presents a larger issue: we can consistently measure the wrong thing. We may think we are measuring a candidate\u2019s math ability when, in reality, that measurement is contaminated by the candidate\u2019s reading ability. We call this a construct validity issue.<\/p>\n<p>Error clearly plays a role in candidates\u2019 scores, and as measurement professionals it is our responsibility to design examinations that minimize error. No examination is perfect. However, when we are aware of the different types of error and their impacts on observed scores, we can advise others on how to avoid common mistakes.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A candidate\u2019s observed score can be broken down into two components: a true score and an error component. The error component of observed scores can be further split into two types, random and systematic. Random errors of measurement affect candidates\u2019 scores purely by chance; examples include the temperature of the testing room, the candidate\u2019s anxiety level, or a misread question. Systematic errors of measurement, by contrast, are factors that consistently impact a candidate\u2019s scores. For example, when measuring a candidate\u2019s math skill through word problems, the candidate\u2019s reading level could affect their scores. 
If the same math test were administered repeatedly to the same candidate under the same conditions, this error would persist.<\/p>","protected":false},"author":10,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-27","post","type-post","status-publish","format-standard","hentry","category-industry-news"],"_links":{"self":[{"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/posts\/27","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/comments?post=27"}],"version-history":[{"count":1,"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/posts\/27\/revisions"}],"predecessor-version":[{"id":141,"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/posts\/27\/revisions\/141"}],"wp:attachment":[{"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/media?parent=27"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/categories?post=27"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.proftesting.com\/blog\/wp-json\/wp\/v2\/tags?post=27"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}