Wednesday, December 03, 2008

New evidence of the power of GMA


One of the biggest areas of focus for personnel psychologists is uncovering which selection mechanisms do the best job of predicting job performance.

Different researchers have focused on various tests, but perhaps none have received as much attention as measures of general mental ability (GMA). GMA has consistently been shown to produce the highest criterion-related validity (CRV) values and has some very strong proponents. (For those of you not up on your statistics, CRV refers to the statistical relationship between test scores and subsequent job or training performance; with a maximum value of 1.0, the bigger, the better.)
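Concretely, a CRV coefficient is just the Pearson correlation between scores on a selection test and a later measure of job or training performance. Here is a minimal sketch using purely made-up numbers (the scores and ratings below are illustrative, not from any study):

```python
from statistics import mean, stdev

# Hypothetical data: applicants' test scores and later supervisor ratings
test_scores = [72, 85, 60, 90, 78, 65, 88, 70]
performance = [3.1, 4.2, 2.8, 4.5, 3.9, 3.0, 4.0, 3.4]

def pearson_r(x, y):
    """Pearson correlation: sample covariance divided by the product of SDs."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(test_scores, performance)
print(round(r, 3))  # the closer to 1.0, the better the test predicts performance
```

In real validation studies this observed correlation is then corrected for statistical artifacts (criterion unreliability, range restriction) before being reported as a validity estimate.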

One of the most strident advocates of ability testing is Frank Schmidt, who has studied and written extensively on the topic. You may have heard of the widely cited article he co-authored with John Hunter in 1998. In that article, they present a CRV value of .51 for cognitive ability tests, which is considered excellent. Only work samples received a higher value, and even that figure has subsequently been questioned.

In the latest issue of Personnel Psychology (v61, #4), Schmidt and his colleagues present updated CRV values, and they're even higher. Using what they claim is a more accurate way of correcting for range restriction, the authors present an overall value of .734 for job performance and .760 for training performance. These are the highest values I've seen reported in a major study such as this, and they further solidify GMA as "the construct to beat" when predicting performance.
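For context, range restriction corrections address the fact that validity is typically estimated on hired employees, whose test scores vary less than the full applicant pool's, which deflates the observed correlation. Schmidt and colleagues argue for a correction that handles *indirect* restriction; the classic *direct* restriction formula (Thorndike's Case II) is simpler and gives the flavor. The sketch below uses that simpler formula with made-up inputs, not the authors' actual procedure or data:

```python
import math

def correct_direct_range_restriction(r_observed, u):
    """Thorndike Case II correction for direct range restriction.

    r_observed: validity observed in the restricted (incumbent) sample
    u: ratio of unrestricted (applicant) SD to restricted (incumbent) SD
    """
    return (u * r_observed) / math.sqrt((u**2 - 1) * r_observed**2 + 1)

# Illustrative numbers: an observed r of .30 and an SD ratio of 1.5
print(round(correct_direct_range_restriction(0.30, 1.5), 3))  # 0.427
```

The corrected value is always at least as large as the observed one (when u > 1), which is why the choice of correction method matters so much to the headline numbers in studies like this.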

The article also applies this same updated statistical approach to the CRV of two personality variables that have been generally supported--Conscientiousness (Con) and Emotional Stability (ES). Unfortunately, the values presented for these were not much larger than previously reported: .332 (.367) for Con and -.100 (-.106) for ES, for job (training) performance.

That all being said, there are some things to note:

1) Use of GMA tests for selection is likely to produce substantial adverse impact with most applicant samples of any appreciable size, potentially limiting their use in many cases.

2) CRV coefficients are just one "type" of validity evidence. The calculation is far from perfect and depends greatly on the criterion being used. The authors admit that they were unable to measure the prediction of contextual performance, which could have resulted in substantially higher values for the personality variables.

3) On a related note, some of the largest CRV values for personality tests I've seen were reported in Hogan & Holland (2003), where they aligned predictor and criterion constructs. This study was excluded from the current study because "the performance criteria they employed were specific dimensions of job performance rather than overall job performance."

4) The lower values reported in this study for personality measures may also reflect the way personality is measured, which the authors acknowledge. They suggest that using outside raters, as well as multiple scales for the same constructs, may yield higher CRV values. Interestingly, they also suggest that personality may matter less because individuals with sufficient GMA can compensate for their weaknesses--for example, an introvert forcing themselves to speak frequently with others.

5) CRV values for GMA continued to vary substantially depending on the complexity of the job, with values differing by as much as .20 to .30 across complexity levels. This is a key point: the type of job--and of job performance--matters when generating these numbers.

Last but not least, there's another great article in this issue--coincidentally, devoted to conducting CRV studies--by Van Iddekinge and Ployhart; check it out. They go into detail about many issues directly relevant to the study above.

2 comments:

Daniel said...

Thank you so much for the updated meta-analysis on Work Samples. Sometimes I feel I so blindly adhere to the Hunter & Schmidt (1998) findings, thus losing sight of the need to remain a critical consumer of the literature.

BryanB said...

Absolutely. I've considered having a running sidebar on the latest criterion values, but am worried it will be misunderstood.