Monday, December 03, 2007

Winter '07 Personnel Psychology

Things are starting to heat up in the journal Personnel Psychology. The shot across the bow of personality testing fired in the last issue turns into a full-blown brawl in this one. But first, let's not forget another article worth our attention...

First up, Berry, Sackett, and Landers revisit the correlation between interview and cognitive ability scores. Previous meta-analyses have put this value somewhere between .30 and .40. Using an updated data set, excluding samples in which interviewers likely had access to ability scores, and correcting for range restriction more accurately, the authors calculate a corrected r of .29 based on the entire applicant pool. The correlation is even smaller when interview structure is high, when the interview is behavioral description rather than situational or composite, and when job complexity is high. Why does this matter? Because it affects which other tests you might want to use--the authors point out that, using their updated numbers, they obtained a multiple correlation of .66 for a high-structure interview combined with a cognitive ability test (using Schmidt & Hunter's methods and numbers). Pretty darn impressive.
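The arithmetic behind that .66 is the standard two-predictor multiple correlation formula. Here's a minimal sketch; the input values are illustrative assumptions on my part (the .51 validities are the familiar Schmidt & Hunter 1998 estimates for GMA and structured interviews, and .20 is a plausible interview-GMA intercorrelation in the lower range for high-structure interviews), not the article's exact figures:

```python
import math

def multiple_r(r1, r2, r12):
    """Multiple correlation of a criterion with two predictors, given
    each predictor's validity (r1, r2) and the predictors'
    intercorrelation (r12). Standard two-predictor formula."""
    r_sq = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_sq)

# Illustrative inputs (assumptions, not the article's exact numbers):
# .51 validity for GMA, .51 for a structured interview, and an
# interview-GMA intercorrelation of about .20.
R = multiple_r(0.51, 0.51, 0.20)
print(round(R, 2))  # -> 0.66
```

The takeaway is visible in the formula itself: holding the validities fixed, a lower intercorrelation between the interview and the ability test means more incremental validity from combining them, which is why Berry et al.'s smaller r matters.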


Now that we have that under our belt, ready for the main event? As I said, in the last issue Morgeson et al. came out quite strongly against the use of self-report personality tests in selection contexts--primarily because, they claim, the uncorrected criterion-related validity coefficients are so small. So it's not surprising that this issue contains two articles by personality researcher heavyweights defending their turf...

First, Tett & Christiansen raise more points than I have space for here. Among them: we should consider the conditions under which personality tests are used and validity coefficients aggregated; there are occupational differences to consider; coefficients found so far aren't as high as they could be if we used more sophisticated approaches like personality-oriented job analysis; and coefficients increase when multiple trait measures are used. This quote sums their position up nicely: "Overall mean validity ignores situational specificity and can seriously underestimate validity possible under theoretically and professionally prescribed conditions."

Second, Ones, Dilchert, Viswesvaran, and Judge come out swinging with several arguments, including: conclusions should be based on corrected coefficients; the coefficients are on par with those of other frequently used predictors, some of which are much more costly to develop (e.g., assessment centers, biodata); different combinations of Big 5 factors are optimal depending upon the occupation; and compound personality variables (e.g., integrity) should be considered. Their suggestions include developing more other-rating instruments and investigating non-linear effects (hallelujah), dark-side traits, and interactions. They sum up: "Any selection decision that does not take the key personality characteristics of job applicants into account would be deficient."

Not to be out-pulpited (yes, you can use that phrase), Morgeson et al. come back with a response to the above two articles, reiterating how correct they were the first time around. They state that much of what those authors wrote was "tangential, if not irrelevant"; that, with respect to the ideas for increasing coefficients, "the cumulative data on these 'improvements' is not great"; and that the corrected Rs presented by Ones et al. aren't impressive when compared to other predictors. They point out some flaws of personality tests (applicants can find them confusing and offensive) but fail to mention that ability tests aren't everyone's favorite either. They claim that job performance is the primary criterion we should be interested in (which IMHO is a bit short-sighted), and that corrections of coefficients are controversial.
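For readers wondering what "corrections" are actually in dispute here: the two standard Hunter-Schmidt-style adjustments are correcting an observed validity for unreliability in the criterion measure and for range restriction in the predictor. A minimal sketch, with purely illustrative numbers that are my assumptions rather than figures from any of the articles:

```python
import math

def correct_for_criterion_unreliability(r, ryy):
    """Disattenuate an observed validity r for unreliability in the
    criterion (reliability ryy), yielding 'operational validity'."""
    return r / math.sqrt(ryy)

def correct_for_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction,
    where u = restricted SD / unrestricted SD of the predictor."""
    return (r / u) / math.sqrt(1 + r**2 * (1 / u**2 - 1))

# Illustrative numbers (assumptions, not from the articles):
r_obs = 0.20   # observed validity in an incumbent sample
r_op = correct_for_criterion_unreliability(r_obs, ryy=0.52)
r_full = correct_for_range_restriction(r_op, u=0.7)
print(round(r_op, 2), round(r_full, 2))  # -> 0.28 0.38
```

The sketch shows why the debate gets heated: a modest observed .20 can nearly double after both corrections, and everything hinges on the reliability and range-restriction estimates you plug in--which is exactly the part Morgeson et al. call controversial.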

So where are we? Honestly, I think these fine folks are talking past each other in some respects. Some issues (e.g., adverse impact) don't even come up, while others (e.g., faking) get far too much attention. It's difficult to compare the arguments side by side because each article is organized differently. It doesn't help that the people on both sides are among the researchers with the most invested in--and the most to lose from--their particular position.

I'm thinking what's needed here is an outside perspective. Here's my two cents: this isn't an easy issue. Criteria are never "objective." Job performance is not a singular construct. Job complexity has a huge impact on the appropriate selection device(s). And organizations are simply not using cognitive ability tests nearly as much as they are conducting interviews. So let's stop arguing about which type of test is "better" than the others. Frankly, that's cognitive laziness.

So is this just sound and fury, signifying nothing? No, because people are interested in personality testing. Hiring supervisors are convinced that it takes more than raw ability to do a job. We shouldn't ignore the issue; instead we should focus on providing sound advice for practitioners and treating other researchers with respect and attention.

Should you use personality tests? I'll answer that question with more questions: What does the job analysis say? What does your applicant pool look like? What are your resources? These tests aren't something to apply cookie-cutter style, but they're not something to write off completely either.

Okay, I'm off my soapbox. Last but not least, there are some good book reviews in this issue. One covers Bob Hogan's Personality and the Fate of Organizations (which I enjoyed immensely and actually finished, which is rare for me), which the reviewer recommends; another is Alternative Validation Strategies, which the reviewer highly recommends; and the third is Recruiting, Interviewing, Selecting, & Orienting New Employees by Diane Arthur, which the reviewer...well...sort of recommends--for generalist HR practitioners.

That's all folks!
