Guess how many articles are in the most recent Journal of Applied Psychology. Go ahead, take a gander.
Try 23. I mean... that's just showing off.
So what's in there about recruitment & assessment? Believe it or not, only two articles. Let's take a look at 'em.
First up, a study by Klehe and Anderson looked at typical versus maximum performance (the subject of the most recent issue of Human Performance) during an Internet search task. Data from 138 participants indicated that motivation to perform well (measured by direction, level, and persistence of effort) rose when people were trying to do their best (maximum performance). But the correlation between motivation and performance diminished under this condition, while the relationship between ability (measured by declarative knowledge and procedural skills) and performance increased.
What the heck does this mean? If you're trying to predict the MAXIMUM someone can do, you're better off using knowledge-based and procedure-based tests. If, on the other hand, you want to know how well they'll perform ON AVERAGE, check out tests that target things like personality, interests, etc.
Second, Lievens and Sackett investigated various aspects of situational judgment tests (SJTs). The authors were looking at factors that could increase reliability when you're creating alternate forms of the same SJT. Using a fairly large sample (3,361) in a "high-stakes context," they found that even small changes to the context of a question resulted in lower consistency between versions. On the other hand, using more stringent procedures when developing the alternate forms improved consistency.
What the heck does this mean? If you're developing alternate forms of SJTs (say, because you give the test a lot and you don't want people seeing the same items over and over), this study suggests you shouldn't get too creative in changing the situations you're asking about.
As usual, the very generous Dr. Lievens has made this article available here. Just make sure to follow fair use standards, folks.