Decoding the Technical Jargon used in Psychometric Assessments

Psychometric assessments are increasingly prevalent in employee selection and development. However, their use involves a good deal of technical jargon with which practitioners are often unfamiliar. Here, we decode some of the most commonly used terms.

Reliability
Reliability of a psychometric instrument refers to the precision of measurement: if the same people take the test repeatedly, it should produce the same results. Reliability is a prerequisite for validity (a test cannot be valid unless it is first reliable), and it comes in several forms, each answering a slightly different question; a brief computational sketch follows the list below.

  • Test-Retest Reliability – Assesses the consistency of test scores when the same test is administered to the same individuals on two or more occasions – i.e. ‘Are scores stable across repeated administrations over time?’
  • Parallel Form Reliability – Assesses the consistency of results between different versions of the same test, designed to measure the same construct – i.e. ‘Do different forms of the test measure the same ability?’
  • Internal Consistency – Assesses the extent to which items within a test consistently measure the same construct – i.e. ‘Do different items of the test measure the same ability?’
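To make these ideas concrete, here is a minimal sketch in Python showing how test-retest reliability and internal consistency might be estimated from raw scores. The data, variable names and the second-sitting adjustments are entirely hypothetical; in practice these statistics come from the instrument's technical manual rather than your own spreadsheet.

```python
import numpy as np

# Hypothetical data: rows are respondents, columns are test items (illustrative only).
items = np.array([
    [4, 5, 4, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
])

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency: do the items hang together as one scale?"""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Test-retest reliability: correlate total scores from two administrations.
scores_time1 = items.sum(axis=1)
scores_time2 = scores_time1 + np.array([1, 0, -1, 0, 1])  # second sitting (made up)
test_retest_r = np.corrcoef(scores_time1, scores_time2)[0, 1]

print(f"Cronbach's alpha (internal consistency): {cronbach_alpha(items):.2f}")
print(f"Test-retest correlation: {test_retest_r:.2f}")
```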

Validity
Validity is the extent to which an instrument measures what it intends to measure. High validity is the single most important factor to consider when using an aptitude test or personality questionnaire. Instruments should report, and reputable ones typically do report, several forms of validity evidence; a short worked example follows the list below.

  • Face Validity – Judges whether a test appears, on the surface, to measure what it aims to measure – i.e. ‘Does it feel right for what it is supposed to measure?’
  • Concurrent or Predictive Validity – Measures the extent to which a test accurately predicts or correlates with performance on a related measure, either at the same time (concurrent validity) or in the future (predictive validity) – i.e. ‘Does it link to current or future performance?’
  • Content Validity – Ensures that a test assesses the relevant material or content it intends to measure – i.e. ‘Is the content of the instrument relevant for the job?’
  • Construct Validity – Evaluates whether an assessment measures the theoretical construct it claims to measure – i.e. ‘Does the instrument measure what it is supposed to measure?’
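As an illustration of criterion-related validity, the sketch below correlates hypothetical selection-test scores with later job-performance ratings for the same people. The numbers are invented for demonstration; a published validity coefficient would come from a proper validation study.

```python
import numpy as np

# Hypothetical data: selection test scores and later job-performance ratings
# for the same ten people (values are illustrative only).
test_scores = np.array([55, 62, 48, 71, 66, 59, 80, 45, 68, 74])
performance = np.array([3.1, 3.4, 2.8, 4.2, 3.9, 3.2, 4.5, 2.6, 3.8, 4.1])

# Predictive (criterion-related) validity is usually summarised as the
# correlation between test scores and the later criterion measure.
validity_coefficient = np.corrcoef(test_scores, performance)[0, 1]
print(f"Predictive validity coefficient: {validity_coefficient:.2f}")
```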

Norms
Norms act like a guidebook, telling a person where they stand relative to others who have taken the test. A psychometric instrument should be supported by norms drawn from a local, relevant and up-to-date comparison group; a brief sketch of how norms are applied follows.
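For illustration, the following sketch shows how a single raw score might be located within a norm group by converting it to a z-score, a percentile and a sten score. The norm group here is simulated; real norms come from the test publisher's documented comparison samples.

```python
import numpy as np

# Hypothetical norm group: raw scores from 1,000 comparable test-takers (simulated).
rng = np.random.default_rng(0)
norm_group = rng.normal(loc=25, scale=5, size=1000)

def interpret_score(raw: float, norms: np.ndarray) -> dict:
    """Locate one person's raw score within the norm group."""
    z = (raw - norms.mean()) / norms.std(ddof=1)       # standard (z) score
    percentile = (norms < raw).mean() * 100            # % of norm group scoring lower
    sten = int(np.clip(round(z * 2 + 5.5), 1, 10))     # common 1-10 reporting scale
    return {"z": round(z, 2), "percentile": round(percentile, 1), "sten": sten}

print(interpret_score(raw=31, norms=norm_group))
```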

A thorough understanding of these concepts is essential for conducting psychometric assessments with accuracy and integrity in any talent management process.
