Monday, January 18, 2010
How to get r = 1.0
Recruiters have a variety of measures of their success, often including process outcomes (time-to-fill, number of requisitions filled, etc.).
And although assessment professionals have a variety of success measures, some in common with recruiters (e.g., tenure), there is one measure that stands above all others: job performance.
The "gold standard" of this measurement is to correlate test scores with job performance measures (called criterion-related validation evidence). A correlation of, say, .50 between these two is considered outstanding. Square that and you have the proportion of variance in job performance explained. So in other words, when we can explain 25% of job performance with assessments, we call that success (and with good reason, because it's a heck of a lot better than 0%).
Why not higher than 25%? What would it take to get r = 1.0, in other words a perfect correlation between test scores and performance? Here is a somewhat tongue-in-cheek recipe for achieving this impossible dream:
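As a quick illustration of the arithmetic above, here is a short Python sketch (using made-up scores, not real validation data) that computes the correlation for a perfectly linear test-score/performance relationship and shows the .50-squared-equals-25% calculation:

```python
import numpy as np

# Hypothetical data for 5 hires: a test score and a supervisor
# performance rating that happens to track the test perfectly.
test_scores = np.array([10, 20, 30, 40, 50])
performance = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# Pearson correlation between the two measures.
r = np.corrcoef(test_scores, performance)[0, 1]
print(round(r, 2))        # 1.0 -- the "impossible dream"

# The more realistic case from the text: r = .50 explains 25%
# of the variance in job performance.
r_typical = 0.50
print(r_typical ** 2)     # 0.25
```

Any noise in either measure (imperfect tests, imperfect ratings) pulls r below 1.0, which is exactly what the recipe below is driving at.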
1. An accurate identification of the top competencies/KSAs required for the job. Qualified subject matter experts reach consensus on a handful of far and away the most important qualities that impact job performance.
2. Perfectly constructed and administered, perfectly reliable and accurate measures of the top KSAs.
3. Variability among applicants in terms of amount of the relevant KSAs possessed.
4. Test scores combined and weighted appropriately given the job analysis results.
5. Variability in scores for those hired.
6. A clear description of the work to be performed and competencies to be demonstrated so the individuals understand expectations.
7. Perfectly reliable, accurate measures of job performance that capture behaviors one would logically relate to the critical KSAs.
8. A supportive work environment (e.g., high quality supervision, adequate resources) so environmental factors don't interfere with work performance.
9. Variability in job performance among those hired using the assessments.
10. Elimination of outside factors that may contribute to lower job performance (e.g., family emergencies, medical/psychological changes).
As you can see, some of these are achievable (1, 4, 6), others are challenging and circumstance-dependent but not impossible (3, 5, 8, 9), and some are practically impossible (2, 7, 10). I said earlier this was tongue-in-cheek because obviously we'll never have a situation where all of these conditions (as well as ones I'm sure I forgot) are true.
Does this mean we should abandon the correlation between test score(s) and job performance? Absolutely not. It should continue to be one of our "gold standards" for measuring our success as assessment professionals. But we--and our customers--should have our eyes wide open before pressing "compute."