p-value - This is the proportion of examinees who responded to the item correctly. It is also referred to as the item difficulty index.
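As a minimal sketch, the p-value for a single item is just the mean of its scored 0/1 responses (the data below are hypothetical):

```python
# Hypothetical scored responses for one item: 1 = correct, 0 = incorrect.
responses = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

def item_difficulty(responses):
    """p-value: proportion of examinees answering the item correctly."""
    return sum(responses) / len(responses)

print(item_difficulty(responses))  # 0.7
```

Note that despite the name, higher p-values indicate easier items, since more examinees answered correctly.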
Paper-and-pencil testing - This refers to the traditional mode of testing in which examinees refer to paper test booklets to read the items and provide their responses using pencils. It is usually contrasted with computer-based testing.
Parallel forms reliability - This method of reliability is used when an exam program has developed multiple, parallel forms of a test. Parallel forms reliability provides an estimate of the similarity that might be expected between an examinee's scores across the separate test forms.
Parallel test forms - This refers to two or more test forms that are developed for a given exam program, according to the same test blueprint and statistical criteria. The forms should be assembled in such a way that they are as similar to one another as possible.
Pass/fail classification - This type of score information indicates whether or not an examinee demonstrated sufficient knowledge of the content and competencies measured by the test. The pass/fail classification decision is the most critical type of score information for criterion-referenced test programs.
Passing score - This refers to the minimum score an examinee must earn in order to pass a test, or to be classified as a master. The passing score for an exam program is determined through a standard setting process. It is also known as the passing point, the cutoff score, or the cut-score.
Percentile rank score - This type of score provides a comparison of an individual examinee's performance to other examinees who took the test. Specifically, an examinee's percentile rank indicates the percentage of other examinees who earned scores below that of the given examinee.
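The "percentage scoring below" definition above can be sketched directly (the scores are hypothetical; some programs instead count half of the tied scores, a variation not shown here):

```python
def percentile_rank(score, all_scores):
    """Percentage of examinees whose scores fall below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
print(percentile_rank(85, scores))  # 60.0 -> examinee outscored 60% of the group
```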
Pilot test - This refers to a preliminary item field test, in which response data are collected from a small sample of examinees for analysis and review.
Point-biserial correlation - This is a specific statistical technique often used in testing as a means of computing an item discrimination index.
Policies - The written principles that guide the administration, operation, and decision-making of the certification body. Policies are frequently set by the governing board or other entity with authority over the credentialing program. Policies are implemented by all persons associated with the functions of the certification program to ensure fairness and consistency in decisions and practices.
Predictive validity - The predictive validity of a test, like concurrent validity, is estimated statistically. Predictive validity is specifically concerned with the extent to which a test can predict examinees' future performances as masters or non-masters. It is particularly important for tests used in such applications as selection or admissions.
Pretesting - This refers to an item evaluation process. Specifically, pretesting is used to collect examinee response data for evaluating the performance of new items intended to supplement the item bank in an ongoing exam program.
Procedures - The administrative steps followed to implement and administer policy.
Professional certification - This refers to a non-governmental process for ensuring professional competency. In professional certification, standards and requirements for a profession are established to ensure that individuals awarded certification have met the requisite knowledge, skills, and abilities to perform at the pre-determined level in the profession.
Program audit - The process of conducting an evaluation of an entity, or its individual components, to determine compliance with published standards. This is also referred to as an audit.
Psychometrician - A psychometrician is a professional who works in the field of psychometrics, or measurement. Specifically, psychometrics refers to the measurement of individuals' psychological attributes, including job-related knowledge, skills, and abilities.