I wrote an earlier post about a particularly good article in the most recent issue of the Journal of Applied Psychology. But what about the other articles? Let's take a look, because there's plenty here whether you're interested in cognitive ability tests, personality tests, or meta-analysis.
First, a study for all you meta-analysts by Schmidt and Raju about the best way to combine new research with existing meta-analysis results. Results indicated that the traditional "medical model" of adding new studies to the database and re-calculating worked well, as did an alternative Bayesian model the authors describe.
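For the intuition, here's a minimal sketch (my own illustration, not the authors' code) with made-up effect sizes, assuming a simple fixed-effect, inverse-variance-weighted model. Under those simplified assumptions the two approaches agree exactly:

```python
# Sketch only: compares the "medical model" (re-run the meta-analysis with
# the new study added) against a conjugate Bayesian update (prior = the old
# meta-analytic estimate, likelihood = the new study).

def pooled(effects, variances):
    """Fixed-effect meta-analytic estimate: inverse-variance weighted mean."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return est, 1.0 / sum(weights)

# Hypothetical effect sizes and sampling variances from prior studies.
old_effects = [0.30, 0.22, 0.35, 0.28]
old_vars = [0.010, 0.015, 0.008, 0.012]
new_effect, new_var = 0.40, 0.009  # the newly published study

# Medical model: add the new study to the database and re-calculate.
medical_est, _ = pooled(old_effects + [new_effect], old_vars + [new_var])

# Bayesian model: normal-normal update of the old meta-analytic estimate.
prior_est, prior_var = pooled(old_effects, old_vars)
post_var = 1.0 / (1.0 / prior_var + 1.0 / new_var)
post_est = post_var * (prior_est / prior_var + new_effect / new_var)

print(f"medical model: {medical_est:.4f}, Bayesian update: {post_est:.4f}")
# Both print 0.3213 -- under these simplified assumptions the two routes
# coincide; real meta-analytic models are messier.
```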
Next up, a look at what happens to people's scores on cognitive ability tests when they've taken the test before (known as a "practice effect"), by Hausknecht and colleagues. Meta-analyzing the results of 50 studies yielded an adjusted overall effect size of .26. The effects were larger when individuals received coaching and when they re-took the identical test.
Third, a study by Ellingson, Sackett, and Connelly of response distortion on personality tests. Specifically, the authors looked at data from 713 people who had taken the California Psychological Inventory (CPI) twice--once in a selection context (presumably high motivation to "cheat") and once in a development context (presumably low motivation to "cheat"). Results? A limited amount of response distortion going on. Good news for personality tests, although certainly not the last word on this topic.
Fourth, a look at when birds of a feather flock together--and when they don't. Specifically, Umphress and colleagues looked at whether demographic similarity attracted prospective employees or not. What they found was that it depended on whether people preferred "group based social hierarchies" (i.e., were high in social dominance orientation). Among those who did, members of "high status groups" were attracted to demographic similarity in an organization, while members of "low status groups" were repelled by it. Bottom line? Trying to attract applicants by pointing out similarities with current incumbents may or may not be a good idea...
Next, for you stats folk, a look by Sackett, Lievens, Berry, and Landers at the effect of range restriction on correlations between predictors (e.g., between a personality test and a cognitive ability test). Conclusion? That these correlations can be quite distorted when the predictors are used as a composite in an actual selection setting. Why do we care? Because it may mess up our conclusions about things like the incremental validity of one test over another. (A draft of the article goes into more detail.)
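If you want to see that distortion for yourself, here's a quick simulation sketch (mine, not the article's) with hypothetical data: two predictors that are uncorrelated in the applicant pool end up strongly negatively correlated once you keep only the people who scored high on a composite of the two.

```python
# Sketch only: how selecting on a composite distorts predictor intercorrelations.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical applicant pool: cognitive ability and personality scores,
# standardized and uncorrelated (r = 0) before any selection occurs.
cognitive = rng.standard_normal(n)
personality = rng.standard_normal(n)

# Select the top 20% on a unit-weighted composite of the two predictors.
composite = cognitive + personality
selected = composite >= np.quantile(composite, 0.80)

r_pool = np.corrcoef(cognitive, personality)[0, 1]
r_selected = np.corrcoef(cognitive[selected], personality[selected])[0, 1]
print(f"applicant pool r = {r_pool:.2f}, selected group r = {r_selected:.2f}")
# Roughly: pool r = 0.00, selected group r = -0.64. Selection on the
# composite induces a negative correlation that wasn't there before.
```

And that induced correlation is exactly the kind of thing that can muddy conclusions about one test's incremental validity over another.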
Last but not least, Kuncel and Klieger look at the very important issue of how knowledge of one's test scores affects subsequent behavior. Their research revealed a 23% score difference between all individuals who had taken the LSAT and those who took the LSAT and applied to law school. This has implications for range restriction corrections (there's that term again).
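For reference, the standard correction for direct range restriction (Thorndike's Case II) looks like this--a sketch with hypothetical numbers, not anything from the article. The point is that the correction leans on the ratio of unrestricted to restricted standard deviations, which is exactly what self-selection of the kind Kuncel and Klieger document can throw off.

```python
import math

def correct_for_range_restriction(r_restricted: float, sd_ratio: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r_restricted: predictor-criterion correlation observed in the selected group.
    sd_ratio: unrestricted SD / restricted SD on the predictor (u).
    """
    u = sd_ratio
    return (u * r_restricted) / math.sqrt((u**2 - 1) * r_restricted**2 + 1)

# Hypothetical: observed validity of .25 among applicants, whose predictor SD
# is only 70% of the full test-taker population's SD (so u = 1 / 0.7).
print(round(correct_for_range_restriction(0.25, 1 / 0.7), 3))  # 0.346
```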
There you have it! Quite the smorgasbord this issue.