Imagine the following scenario. A detailed study is performed on a job. Observations of incumbents are made. Discussions are held with subject matter experts. Survey data are collected. The results all indicate that this job requires a high level of intelligence, extraversion and conscientiousness, customer service skill, and a fairly advanced ability to use computers. These attributes are, by far, the most important in predicting success on the job.
Taking all this information, we construct a rigorous assessment process. Candidates spend an entire day being observed as they take well-constructed ability and personality tests, participate in several scenarios that require them to demonstrate how they handle customer service situations, and are asked to produce several products on the computer using a variety of software packages. Raters are experts in both the field and the job, and they use objective, behaviorally anchored rating scales.
The results are then combined for an overall score, based on a formula generated from the initial study of the job. Those with the highest score are hired.
Given all the test scores, what is the maximum percentage of subsequent job performance that we can expect to predict?
Those of you in the field of personnel assessment can see this coming. Those of you who aren't, what did you answer? If you said 25%, move to the head of the class.
Why only 25%? Because tests aren't perfect, and because so much more than individual competencies goes into predicting job performance. In fact, 10% is a much more commonly observed figure.
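Where does a number like 25% come from? In selection research, a test battery's validity is expressed as a correlation coefficient, and the proportion of performance variance it predicts is that correlation squared: a validity of r = .50, which is about as good as even excellent assessment batteries get, explains .50² = 25% of the variance. Here is a minimal simulation sketching that relationship, assuming a hypothetical battery with a true validity of .50 (the specific numbers are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000       # simulated candidates
r = 0.50          # assumed validity coefficient for the full battery

# Simulate standardized assessment scores, then generate job performance
# so that it correlates with those scores at exactly r (on average).
scores = rng.standard_normal(n)
performance = r * scores + np.sqrt(1 - r**2) * rng.standard_normal(n)

observed_r = np.corrcoef(scores, performance)[0, 1]
print(f"validity:           r  = {observed_r:.2f}")
print(f"variance explained: r² = {observed_r**2:.0%}")
```

Running this prints a validity near .50 and variance explained near 25%, and dropping r to about .32 reproduces the 10% figure that is more commonly observed in practice.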
25% doesn't sound like a lot. Does this mean we shouldn't use tests when hiring? Absolutely not. Because without well-developed assessments, you can expect to predict nothing: zero percent, nada.
In an article in the September 2008 issue of Industrial and Organizational Psychology, Scott Highhouse addresses one result of this low percentage: many hiring managers and HR professionals persist in relying on their own intuition and judgment, through tools like informal interviews, to supplement (or replace) tests, out of a stubborn belief that this will dramatically increase the likelihood of hiring the right candidate. The problem? They're wrong.
The available research makes clear that unstructured, overly subjective techniques such as resume review and informal interviews are simply not very predictive of job performance. So why do people keep using them? Highhouse suggests a key problem is our addiction to our own judgment. Commentators on the article point out other factors, such as lack of feedback, evolution, and different criteria of interest (e.g., hiring managers often care more about getting someone quickly rather than getting the "best" candidate).
In my experience, the primary reason decision makers insist on using less valid assessment methods comes down to ego, in two ways. First, people simply have a hard time admitting that tests do a better job of predicting things than they do. Second, we have an innate need to be involved in decisions impacting our lives. Who amongst us is willing to hire someone, unseen and unheard, based solely on test results? Even the most die-hard assessment fanatics among us have a difficult time with it, even though we know it would probably be for the best!
All in all, a highly recommended and provocative article that gets to one of the biggest challenges in personnel assessment: how assessment professionals and hiring managers can work together to find the right person for the job.