

Why I choose HRCI – Part Two

Alice Dendinger | October 7, 2015 | Featured, Performance Management, Talent Management

This is Part Two in a series of posts on “Why I choose HRCI.” You can read my first post here.

Note from Alice Dendinger, SPHR: Please read this entire document and “hang in there,” as it gets very technical. Note that this document was prepared by Professional Examination Service, which has been the HRCI testing partner since 2000. Note also that the term “entry-level performance” in the document is specific to the particular exam: “entry-level performance” as an SPHR differs from “entry-level performance” as a PHR. An equivalent term would be an individual who is “justly qualified” for the role.

In October 2014, I asked Mr. Hank Jackson, CEO of SHRM, about the validity of the SHRM exam; he replied that it was not a big concern for him. “Of course the new SHRM exam will be certified” (I think he meant validated). Mr. Jackson reported that SHRM’s testing partner would be the Air Force; Holmes Corporation is their learning partner.

What follows is information about the validity of the HRCI exam:

Examination validity is a concept that refers to how well an examination measures what it is designed to measure. HR Certification Institute examinations meet the standards for content validity — the behavior domains have been systematically analyzed to assure that all major aspects of the practice are covered by examination items in the correct proportion.

The primary goal of the HR Certification Institute examinations is to validate mastery in the field of human resource management and to promote organizational effectiveness. The relevance of the content areas assessed by the HR Certification Institute examinations to entry-level practice, in terms of importance and criticality, is supported by the findings of practice analysis studies. Practice analysis studies ensure that questions and problems on exams are linked to real-world professional responsibilities and tasks. Based on focus group discussions and on the validation data from a large survey sample, examination specifications are developed using a formula that allocates items to functional areas, including task and KSA statements, on the basis of frequency and criticality ratings. These examination specifications become the Body of Knowledge, which is used as the basis for writing questions for the exam. The Body of Knowledge assigns a numeric identifier to each functional area, task, and KSA; this identifier is called a rubric.
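
To make the allocation step concrete, here is a minimal sketch of how frequency and criticality ratings might translate into item counts per functional area. The area names, ratings, the frequency-times-criticality weighting, and the 150-item exam length are all illustrative assumptions; the document does not disclose HRCI’s actual formula.

```python
# Hypothetical sketch: survey ratings drive proportional item allocation.
# All numbers and area names below are invented for illustration.

TOTAL_SCORED_ITEMS = 150  # assumed exam length for this example

# (functional area, mean frequency rating, mean criticality rating)
survey_results = [
    ("Business Management & Strategy", 4.2, 4.5),
    ("Workforce Planning & Employment", 3.8, 4.1),
    ("Human Resource Development",      3.1, 3.6),
    ("Compensation & Benefits",         3.5, 3.9),
    ("Employee & Labor Relations",      4.0, 4.3),
    ("Risk Management",                 2.9, 3.8),
]

# Weight each area by frequency x criticality, then allocate items
# proportionally. Rounded counts may need a final hand adjustment
# so they sum exactly to the form length.
weights = {area: freq * crit for area, freq, crit in survey_results}
total_weight = sum(weights.values())

specification = {
    area: round(TOTAL_SCORED_ITEMS * w / total_weight)
    for area, w in weights.items()
}

for area, n_items in specification.items():
    print(f"{area}: {n_items} items ({n_items / TOTAL_SCORED_ITEMS:.0%})")
```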

Question writers are content experts selected to represent the various subject matter areas covered in the Body of Knowledge. They receive training on how to write questions for examinations and get feedback on their work. Each question goes through several levels of review and revision before it is accepted for inclusion in the item bank. To ensure that the examination questions reflect the functional areas presented in the Body of Knowledge, each question is classified by content experts, linking it to the numeric identifier, or rubric, in the Body of Knowledge. In addition, all items are validated to confirm that they meet minimum standards of importance and criticality to entry-level work, and that they are free from stereotyping and bias.
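
As an illustration of rubric coding, here is a hypothetical sketch of what an item bank record might carry. The field names, the example rubric code, and the sample question are invented for illustration, not drawn from HRCI’s actual bank.

```python
# Sketch of an item bank record with a rubric code tying the question
# back to the Body of Knowledge. Everything here is a made-up example.

from dataclasses import dataclass

@dataclass
class BankItem:
    item_id: str
    rubric: str               # e.g. "02-04-11" = area 02, task 04, KSA 11
    stem: str
    options: list[str]
    key: str                  # letter of the one correct answer
    passed_bias_review: bool  # free of stereotyping and bias
    importance_rating: float  # mean expert rating from validation

item = BankItem(
    item_id="PHR-18342",
    rubric="02-04-11",
    stem="Which recruiting metric best measures source effectiveness?",
    options=["A. Time to fill", "B. Yield ratio",
             "C. Turnover rate", "D. Cost per hire"],
    key="B",
    passed_bias_review=True,
    importance_rating=4.3,
)
```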

Examinations are then developed from this item bank to meet the proportional requirements of the examination specifications, assuring that knowledge is sampled to fairly reflect the actual practice of the profession. Exams are then reviewed by a committee of content experts, who analyze each question for:

  • Proper coding to the Body of Knowledge
  • Overlap and cueing with other questions on the exam
  • Currency
  • Applicability across jurisdictions and industries
  • Answer choices: is there only one clearly correct or best answer?
  • Bias and stereotyping

Revised examinations then undergo a final content expert review before publication.
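
The assembly step described above, drawing items so that each functional area receives its specified share of the form, might look something like the following sketch. The bank contents, area names, percentages, and function names are assumptions for illustration.

```python
# Sketch: assemble a form from the item bank so each functional area's
# share of questions matches the examination specification.

import random

def assemble_form(bank, specification, total_items):
    """bank: {area: [item ids]}; specification: {area: proportion of form}."""
    form = []
    for area, proportion in specification.items():
        n_needed = round(total_items * proportion)
        if len(bank[area]) < n_needed:
            raise ValueError(f"Item bank too shallow in area: {area}")
        form.extend(random.sample(bank[area], n_needed))
    random.shuffle(form)  # so questions are not grouped by content area
    return form

bank = {
    "Strategy":  [f"STR-{i:03d}" for i in range(40)],
    "Staffing":  [f"STA-{i:03d}" for i in range(40)],
    "Relations": [f"REL-{i:03d}" for i in range(40)],
}
specification = {"Strategy": 0.30, "Staffing": 0.40, "Relations": 0.30}

form = assemble_form(bank, specification, total_items=20)
print(form)
```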

The procedures used to prepare examinations are consistent with the technical guidelines recommended by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education (AERA, APA, NCME; 1999), and they adhere to relevant sections of the Uniform Guidelines on Employee Selection Procedures adopted by the Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, and Department of Justice (EEOC, CSC, DOL, DOJ; 1978).

HR Certification Institute examinations are then administered to candidates on a computer-based testing platform. All candidates take examinations under standardized conditions, so that each one has the same opportunity for a successful result. Score results from each examination undergo several quality control checks to assure accuracy.

Passing points for the PHR and SPHR examinations were set via a Standard Setting process immediately after the conclusion of the last practice analysis. For this standard setting, content experts used industry best practices to establish the difficulty of each item on the initial forms. Subsequent tests have been equated back to these forms via Item Response Theory, a very precise and modern equating method. Forms are equated because, although different forms of a given test are built to have similar psychometric qualities, they cannot be expected to be precisely equivalent in the level and range of difficulty of the unique set of test questions comprising them. If candidates take a form of a test that is more difficult than a previous version, they would be at a disadvantage in comparison with previous test candidates. Equating methods measure the difficulty of each form and adjust the passing score as needed, so the same level of candidate performance is reflected in the passing score regardless of form difficulty.
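
As a rough illustration of equating, here is a sketch of true-score equating under a simple one-parameter (Rasch) IRT model: the base form’s passing score is converted to an ability level, and that ability is converted to the equivalent raw score on the new form. The item difficulties and passing score are invented, and this is only a generic sketch of the technique, not HRCI’s actual procedure.

```python
# Sketch of Rasch true-score equating with invented item difficulties.

import math

def expected_score(theta, difficulties):
    """Test characteristic curve: expected raw score at ability theta."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties)

def theta_for_score(score, difficulties, lo=-6.0, hi=6.0):
    """Invert the test characteristic curve by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_score(mid, difficulties) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

base_form = [-1.2, -0.5, 0.0, 0.3, 0.8, 1.1]   # calibrated difficulties
new_form  = [-0.9, -0.2, 0.1, 0.5, 1.0, 1.4]   # a slightly harder form

base_passing_raw = 4.0                          # passing score on base form
theta_pass = theta_for_score(base_passing_raw, base_form)
new_passing_raw = expected_score(theta_pass, new_form)

print(f"Equivalent passing score on the new form: {new_passing_raw:.2f}")
```

Because the second form is slightly harder, the equated passing score comes out below the base form’s, which is exactly the adjustment described above.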

As the candidate groups for the GPHR, SPHR-CA, and PHR-CA examinations are too small for valid and precise equating, a standard setting study is done for each new form.
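
The document does not name the standard setting method, but one common approach is a modified Angoff study, sketched below with invented ratings: each content expert estimates the probability that a “justly qualified” candidate would answer each item correctly, and the averaged sums yield a recommended passing score.

```python
# Sketch of a modified-Angoff standard setting with made-up ratings.
# Each judge rates, per item, the chance a justly qualified candidate
# answers correctly; the passing score is the mean of the summed ratings.

ratings = {
    "judge_1": [0.70, 0.55, 0.80, 0.60, 0.65],
    "judge_2": [0.75, 0.50, 0.85, 0.55, 0.70],
    "judge_3": [0.65, 0.60, 0.75, 0.60, 0.60],
}

judge_cut_scores = {judge: sum(r) for judge, r in ratings.items()}
passing_score = sum(judge_cut_scores.values()) / len(judge_cut_scores)

n_items = len(ratings["judge_1"])
print(f"Recommended passing score: {passing_score:.2f} of {n_items} items")
```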

All HR Certification Institute examinations show high reliability. The Kuder-Richardson Formula #20 (KR-20) reliability estimate and the split-half reliability estimate yield evidence regarding the internal consistency of the tests. Internal consistency refers to the degree of homogeneity among test items and provides an index of the consistency with which a test measures a common attribute of candidates. If a test is reliable, candidates who respond correctly to one set of items or problems will respond correctly to other equivalent sets of test items. Both reliability estimates have a range of 0 to 1.00, and a large reliability coefficient suggests that the items in an examination measure homogeneous content areas. The magnitude of the reliability estimates for these test forms suggests that they assess a homogeneous behavior domain, that is, competence within the field. As a general rule, reliability coefficients of .80 or higher are recommended for credentialing programs, and the reliability estimates observed for these test forms exceed this criterion by a wide margin. For PHR and SPHR examinations, reliabilities exceed .90; for GPHR examinations, reliabilities range between .86 and .89; SPHR-CA and PHR-CA reliabilities are somewhat lower, generally between .78 and .80. The lower reliabilities for the SPHR-CA and PHR-CA forms stem from the range of items on the exam, the relatively small number of candidates testing, and test length.
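
For readers who want to see the KR-20 computation itself, here is a small sketch on an invented response matrix. The formula is KR-20 = k/(k-1) * (1 - sum(p*q)/var), where k is the number of items, p is the proportion answering an item correctly, q = 1 - p, and var is the variance of total scores. The toy data yields a low coefficient only because it has five items and five candidates; real forms are far longer.

```python
# KR-20 on a tiny invented response matrix (1 = correct, 0 = incorrect).

rows = [  # one row of item scores per candidate
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 1],
]

k = len(rows[0])                 # number of items
n = len(rows)                    # number of candidates
totals = [sum(r) for r in rows]  # each candidate's raw score

mean_total = sum(totals) / n
var_total = sum((t - mean_total) ** 2 for t in totals) / n

pq_sum = 0.0
for i in range(k):
    p = sum(r[i] for r in rows) / n   # proportion correct on item i
    pq_sum += p * (1 - p)

kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
print(f"KR-20 reliability: {kr20:.3f}")
```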

The standard errors of estimate based on the KR-20 and the split-half reliability coefficients provide an estimate of the average amount of error associated with a test score. A large standard error indicates that a significant amount of error exists in the test score. The standard errors of estimate for these test forms are small: around 6 points for the PHR/SPHR, 5.25 points for the GPHR, and 4 points for the SPHR-CA and PHR-CA. These small standard errors suggest that the test is a very precise measure of professional competence in the field.
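
The standard error relates to score spread and reliability through SEM = SD * sqrt(1 - r). As a quick check on the reported figures, a reliability of about .90 together with an SEM near 6 points implies a score standard deviation of roughly 19 points; the SD here is inferred for illustration and is not reported in the document.

```python
# Back-of-the-envelope check of the reported standard error figures.
# The standard deviation of 19 is an assumed value, not from the source.

import math

def standard_error(sd, reliability):
    """Classical standard error of measurement: SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

sem = standard_error(19.0, 0.90)
print(f"SEM: {sem:.2f} points")  # ~6.0, matching the PHR/SPHR figure

# Interpretation: a candidate's true score lies within about
# +/- 2 SEMs of the observed score roughly 95% of the time.
print(f"95% band: +/- {1.96 * sem:.1f} points")
```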

The examinations also show consistency in scoring. Within the test level, the tests are comparable with respect to typical performance and the dispersion of test scores about the average scores.

Overall, the test construction methods and statistical results support the inference that the tests achieve their stated objective: to certify those who meet the standard that is relevant to entry-level performance.

In conversation with Mr. Hank Jackson in October 2014, I asked him about validity and informed him that the HRCI exam is validated and revalidated each time someone completes the exam. On each HRCI exam, there are 25 questions that are administered but not scored. These items “test the test questions,” confirming their validity and screening them as possible scored questions for future exams. When I explained this to Mr. Jackson, his reply was, “That is not true; where did you hear that?” I replied that for years this was explained in the first 12 pages of the first SHRM Learning System study book, the one focused on Strategic Management.

The question to ask SHRM about the SHRM-CP and SHRM-SCP concerns their validity in the community of organizations that scrutinize tests for certification purposes. To my knowledge, the SHRM exams are not validated for certification, nor do they have the rich history of validation currently held by HRCI. When I told Mr. Jackson my concerns about SHRM-CP and SHRM-SCP validation during our October 2014 conversation, he said that he considers anyone worried about SHRM test certification to be foolish.
