Monday, June 04, 2007

Summer 2007 Personnel Psychology + free content

The Summer 2007 issue of Personnel Psychology (v. 60, #2) is here and it's got some good stuff, so let's jump right in!

First off is the aptly titled A review of recent developments in integrity test research by Berry, Sackett, and Wiemann, the fifth in a series of articles on the topic. This is an extensive review of research on integrity tests since the last review, which was done in 1996. There's a lot here, so I'll just hit some of the many highlights:

- It appears that integrity tests can vary in their cognitive load depending on which facets are emphasized in the overall score.

- It is likely that aspects of the situation impact test scores (in addition to individual differences); more research is needed in this area.

- Although there have been no significant legal developments in this area since the last review, concerns have been raised over integrity tests being used to identify mental disorders. The authors do not seem concerned, as these tests (e.g., Reid Report, Employee Reliability Index) were not designed for that purpose and thus likely do not violate EEOC Guidelines.

- Research on subgroup scores (e.g., Ones & Viswesvaran, 1998) indicates no substantial differences on overt integrity tests; no research has yet addressed personality-based tests.

- Test-takers do not seem to have particularly positive reactions to integrity tests, although this appears to depend upon the type of test, items on the test, and response format.

Next, Raymond, Neustel, and Anderson investigate certification exams and whether re-taking the same exam or a parallel form results in different score increases. Using a sample of examinees taking ARRT certification exams in computed tomography (N=79) and radiography (N=765), the authors found no significant difference in score gains between the two types of tests, suggesting exam administrators may wish to re-think the importance of alternate forms for certification, particularly given the cost of development (estimated by the authors at between $50K and $150K). The authors do point out that the generalizability of these results is likely limited by test type and examinee characteristics.

Third, Henderson, Berry, and Matic investigate the usefulness of strength and endurance measures for predicting firefighter performance on physically demanding suppression and rescue tasks. Using a sample of 287 male and 19 female fire recruits hired by the city of Milwaukee, the authors found that both types of measures (particularly strength measures such as the lat pull-down and bench press) predicted a variety of criteria, including a roof ladder placement exercise, axe chopping, and a "combat" test. The authors suggest continued gathering of data to support the use of these types of tests (while acknowledging the ever-present gender differences), and discuss several problems with simulated suppression and rescue tasks, now used by many municipalities in light of previous legal challenges to pure strength and endurance measures.

Lastly, LeBreton et al. discuss an alternate way of demonstrating the value of variables in I/O research. Traditionally, researchers have focused on incremental validity, essentially the amount of "usefulness" a variable adds to other variables already in the equation. (This lets you determine, for example, whether a personality test would help you predict job performance above and beyond the test(s) you already use.) Instead, the authors present the idea of relative importance, which shifts the focus to how much each variable in the equation contributes. Fascinating stuff (and far more than I can describe here), and something I'd like to see more of. I believe the authors are correct in stating that it would be much easier to talk to managers about how useful each test in a battery is than about the fact that overall the battery predicts 35% of performance. The article also includes an intriguing re-analysis of Mount, Witt, and Barrick's 2000 study of the use of biodata with clerical staff.
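To make the contrast concrete, here's a minimal sketch of the two ideas using synthetic data. This is my own illustration, not the authors' method: the variable names and the simple Shapley-style averaging over entry orders (one common way to compute relative importance with two predictors) are my assumptions.

```python
# Contrast incremental validity (delta R-squared) with a Shapley-style
# relative-importance split for two predictors. Synthetic data; the
# "cognitive" and "personality" variables are hypothetical tests.
import numpy as np

rng = np.random.default_rng(0)
n = 500
cognitive = rng.normal(size=n)                        # test already in use
personality = 0.4 * cognitive + rng.normal(size=n)    # correlated new test
performance = 0.5 * cognitive + 0.3 * personality + rng.normal(size=n)

def r_squared(predictors, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cog = r_squared([cognitive], performance)
r2_pers = r_squared([personality], performance)
r2_both = r_squared([cognitive, personality], performance)

# Incremental validity: what does personality add beyond the cognitive test?
delta_r2 = r2_both - r2_cog

# Relative importance: average each predictor's contribution over the two
# possible orders of entry (entered first vs. entered second).
weight_pers = 0.5 * (r2_pers + (r2_both - r2_cog))
weight_cog = 0.5 * (r2_cog + (r2_both - r2_pers))

# The two weights partition the total R-squared, so each test gets an
# interpretable share of the overall prediction.
print(delta_r2, weight_cog, weight_pers, r2_both)
```

The appeal for practitioners is in that last comment: instead of one opaque "the battery explains X% of performance" figure, each test gets its own share of the explained variance.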


This issue also includes reviews of several books, including the third edition of Ployhart, Schneider, and Schmitt's Staffing Organizations (conclusion: good but not great), Weekley and Ployhart's Situational Judgment Tests (conclusion: good as long as you already know what you're doing), and Griffith and Peterson's A Closer Examination of Applicant Faking Behavior (conclusion: good for researchers, not so good for managers).


But wait, there's more...the Spring 2007 issue, which had some interesting stuff as well, is free right now! So get those articles while you can. Hey, it's worth surfing over there just for McDaniel et al.'s meta-analysis of situational judgment tests!
