Sunday, May 15, 2011
IJSA v.19 #2: Personality, personality, personality (and more)
The June 2011 issue of the International Journal of Selection and Assessment (IJSA, volume 19, issue 2) is out. It's chock-full of articles on personality measurement, but includes other topics as well, so let's jump in! Warning: lots of content ahead.
- O'Brien and LaHuis analyzed applicant and incumbent responses to the 16PF personality inventory and found differential item functioning for over half the items (but of those, only 20% were in the hypothesized direction!).
- Reddock et al. report on an interesting study of personality scores and cognitive ability predicting GPA among students. "At school" frame-of-reference instructions increased validity and, even more interestingly, within-person inconsistency on personality dimensions added incremental validity beyond conscientiousness and ability.
- Fein & Klein introduce a creative approach: using combinations of facets of Five-Factor Model traits to predict outcomes. Specifically, the authors found that a combination (e.g., assertiveness, activity, deliberation) did as well or better in predicting behavioral self-regulation compared to any single facet or trait.
- Think openness to experience is the runt of the FFM? Mussel et al. beg to differ. The authors argue that subdimensions and facets of openness (e.g., curiosity, creativity) are highly relevant for the workplace and understudied--and demonstrate differential criterion-related and construct validity.
- So just when you're thinking to yourself, "Hey, I'm liking this subdimension/facet approach," along comes van der Linden et al. with a study of the so-called General Factor of Personality (GFP), which is proposed to occupy a place at the top of the personality structure hierarchy. The authors studied over 20,000 members of the Netherlands armed forces (fun facts: active force of 61,000, 1.65% of GDP) and found evidence supporting a GFP and value in its measurement (i.e., it predicted dropping out of military training). Unsurprisingly, not everyone is on the GFP bus.
- Next, another fascinating study by Robie et al. on the impact of the economy on incumbent leaders' personality scores. In their sample of US bank employees, as unemployment went up, so did personality inventory scores. Faking or environmental impact? Fun coffee break discussion.
- Recruiters, through training and years of experience, are better at judging applicant personality than laypersons, right? Sort of. Mast et al. found that while recruiters were better at judging the "global personality profile" of videotaped applicants as well as detecting lies, laypeople (students in this case) were better at judging specific personality traits.
- Last one on the personality front: Iliescu et al. report the results of a study of the Employee Screening Questionnaire (ESQ), a well-known covert, forced-choice integrity measure. Scores showed high criterion-related validity, particularly for counterproductive work behaviors.
- Okay, let's move away from personality testing. Ziegler et al. present a meta-analysis of predicting training success using g, specific abilities, and interviews. The authors were curious whether the dominant paradigm that g is the single best predictor would hold up when the predictors are measured in the same sample. Answer? Yep. But specific abilities and structured interviews were valuable additions (unstructured interviews--not so much), and job complexity moderated some of the relationships.
- Given their popularity and long history, it's surprising that there isn't more research on role-players in assessment centers (ACs). Schollaert and Lievens aim to rectify this by investigating the utility of predetermined prompts for role-players during ACs. Turns out there are advantages for measuring certain dimensions (problem solving, interpersonal sensitivity). Sounds promising to me. Fortunately, the full article is freely available.
- What's the best way to combine assessment scores into an overall profile? Depends on who you ask. Diab et al. surveyed a sample of adults and found that those in the U.S. preferred holistic over mechanical integration for both interview and other test scores, whereas those outside the U.S. preferred holistic integration for interview scores only.
- Still with me? Last but not least, re-testing effects are a persistent concern, particularly on knowledge-based tests. Dunlop et al. looked at a sample of firefighter applicants and found the largest practice effects for abstract reasoning and mechanical comprehension (both timed)--although even those were only two-fifths of a standard deviation. Smaller effects were found for a timed test of numerical comprehension ability and an untimed situational judgment test. For all four tests, practice effects had diminished to non-significance by the third session.