It's almost Thanksgiving here in the U.S., a time to give thanks, and I'd like to thank a largely unsung group of people. Thank you to all the researchers out there who try to help us put some science around the art we call personnel recruitment and selection. Thank you for all your work and insights.
What better way to give thanks than by talking about a new issue of the International Journal of Selection and Assessment (v16, #4)! As usual it's chock-full of good articles, so let's take a look at some of them.
First, a study of applicant perceptions of credit checks, something many of us do for sensitive positions. Using samples of undergraduates, Kuhn and Nielsen found mostly negative reactions, especially among older participants, though reactions varied with the explanation given as well as privacy expectations. Worth a look for any of you who conduct large numbers of background checks (and if you do, don't miss the Oppler et al. study below).
Next up, a fascinating study of police officer selection in the Netherlands. Using data from over 3,000 applicants, De Meijer et al. found evidence of differential validity between ethnic majority and minority applicants. Specifically, cognitive ability tests predicted training performance for minorities but not for the majority group, for whom prediction was low using cognitive ability tests and somewhat better using non-cognitive variables. By the way, the lead author's dissertation, a fascinating look at similar issues, can be found here.
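For anyone fuzzy on the term, differential validity just means the predictor-criterion correlation differs across subgroups. Here's a toy Python sketch with simulated data (not the study's) that mirrors the reported pattern: ability tracks training performance in one group but not the other.

```python
# Toy illustration of a differential validity check: compute the
# predictor-criterion correlation separately per subgroup and compare.
# All data below are simulated, not De Meijer et al.'s.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Minority group: training performance partly tracks cognitive ability.
ability_min = rng.normal(size=n)
perf_min = 0.5 * ability_min + rng.normal(scale=0.9, size=n)

# Majority group: performance essentially unrelated to ability.
ability_maj = rng.normal(size=n)
perf_maj = rng.normal(size=n)

r_min = np.corrcoef(ability_min, perf_min)[0, 1]
r_maj = np.corrcoef(ability_maj, perf_maj)[0, 1]
print(f"validity (minority): r = {r_min:.2f}")  # clearly positive
print(f"validity (majority): r = {r_maj:.2f}")  # near zero
```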
The third article is one of those that almost (...almost) makes me want to pay for it, and anybody interested in electronic applicant issues should take note. In this study, Dunleavy et al. used simulations to show the tremendous impact that small numbers of applicants can have on adverse impact (AI) analysis. In fact, the authors reveal situations where AI can be caused or masked by a single applicant applying multiple times! They also present ways of identifying and handling these cases. Scary stuff. Hope the OFCCP is reading.
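To get a feel for the fragility the authors describe, consider the familiar four-fifths rule (AI is flagged when one group's selection rate falls below 80% of another's). Here's a minimal Python sketch with entirely hypothetical counts, not the authors' actual simulations, showing how a single duplicate applicant record can flip the result.

```python
# Hypothetical counts only -- a minimal sketch of the four-fifths rule,
# not Dunleavy et al.'s simulation methodology.

def impact_ratio(min_hired, min_applied, maj_hired, maj_applied):
    """Minority-to-majority selection rate ratio (four-fifths rule)."""
    return (min_hired / min_applied) / (maj_hired / maj_applied)

# Baseline pool: 2 of 5 minority applicants hired vs. 5 of 10 majority.
# The ratio is exactly 0.80, so the four-fifths rule is (barely) met.
print(impact_ratio(2, 5, 5, 10))   # 0.8 -> no AI flagged

# Count one unsuccessful minority applicant a second time (a duplicate
# record): the ratio drops to ~0.67 and AI is now flagged.
print(impact_ratio(2, 6, 5, 10))   # ~0.667 -> AI flagged
```

The masking case is just the mirror image: a duplicated successful applicant can nudge a failing ratio back above 0.80, which is why the de-duplication procedures the authors propose matter.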
Fourth, Lievens and Peeters present results of a study of elaboration and its impact on faking on situational judgment tests. Using master's students, the researchers found that requiring elaboration on items (i.e., explaining why they chose the response) had several positive results. It reduced faking on items with high familiarity. It also reduced the percentage of "fakers" at the top of the distribution. Lastly, candidates reported that the elaboration allowed them to better demonstrate their KSAs. This could be a great strategy for those of you worried about the inflation effects of administering SJTs online.
Next, Furnham et al. present a study of assessment center ratings. The authors found that expert ratings of "personal assertiveness," "toughness and determination," and "curiosity" were significantly correlated with participants' personality scores, particularly Extraversion, while correlations with intelligence test scores were low.
Last but definitely not least, Oppler et al. discuss results of a rare empirical study of financial history and its relationship to counterproductive work behaviors (CWBs). Using a "random sample of 2519 employees," the authors found that those with financial history "concerns" were significantly more likely to demonstrate CWBs after hire. Great support for conducting these types of checks.
There are other articles in here, so I encourage you to check them all out. Thank goodness for research!
1 comment:
Something to bear in mind, though, is the probability that financial history (FH) will be prone to adverse impact (just as the use of credit checks is in many populations). I guess this raises the question of whether the use of FH as a predictor of CWB has less adverse impact than other assessment methods such as "integrity" tests or other behavioral-admission or personality-based assessments (which, interestingly, are typically not prone to AI).