Sunday, July 26, 2009

July 2009 J.A.P.: SJTs and more


Situational judgment tests (SJTs) have a long tradition of successful use in employment testing. These (typically multiple-choice) items describe a job-related scenario, then ask the test-taker to endorse the proper response. The question itself usually takes one of two forms:

1) What SHOULD be done in this situation? ("knowledge instruction")

2) What WOULD you do in this situation? ("behavioral tendency instruction")

What are the practical differences between the two? Previous meta-analytic research, specifically McDaniel et al.'s 2007 study, revealed that knowledge instruction items tend to be more highly correlated with cognitive ability, while behavioral tendency items show higher correlations with personality constructs. In terms of criterion-related validity, there appeared to be no significant difference between the two.

But that study had limitations, and two of them are addressed in a study in the July 2009 issue of the Journal of Applied Psychology. Specifically, Lievens et al. eliminated the inconsistency in stem content by holding stems constant while varying only the response instruction, and they studied a large pool of actual applicants rather than the incumbents who dominated McDaniel et al.'s 2007 sample.

Results? Consistent with the 2007 study, knowledge instructions were again more highly correlated with cognitive ability, and there was no meaningful difference in criterion-related validity (the criterion being grades in interpersonally-oriented courses in medical school). Contrary to some research in low-stakes settings, there were no mean score differences between the two response instructions.

Practical implications? The authors suggest knowledge instruction items may be superior due to their resistance to faking. My only concern is that, given their stronger link to cognitive ability, these items are likely to result in adverse impact in many applied settings. As in all assessment situations, the decision will involve a variety of factors, including the KSAs required on the job, the size and nature of the applicant pool, the legal environment, etc. But at least this type of research supports the conclusion that both response instructions seem to WORK. By the way, you can see an in-press version of this article here.

Other content in this journal? There's quite a bit, but here's a sample:

Content validity <> criterion-related validity

More evidence that selection procedures can impact unit as well as organizational performance

Self-ratings appear to be culturally bound

1 comment:

jonathan said...

"What would you do if you saw a £/$ 20 note lying in the street?"

All candidates would say they would pick it up and hand it to a policeman, but isn't that just what they want you to think they would do?

Most people would probably think "wooo hooo" and go straight off and spend it...