Saturday, August 18, 2007

September 2007 issue of IJSA

The September 2007 issue (vol. 15, #3) of the International Journal of Selection and Assessment is out, with the usual cornucopia of good reading for us, particularly if you're into rating formats and personality assessment. Let's skim the highlights...

First, Dave Bartram presents a study of forced-choice versus rating-scale formats in performance ratings. No, not as predictors--as the criterion of interest. Using a meta-analytic database, he found that prediction of supervisor ratings of competencies improved by roughly 50% when the ratings were collected in a forced-choice format--the correlation rose from .25 to .38. That's nothing to sneeze at. Round one goes to forced-choice scales--but see Roch et al.'s study below...
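
A quick back-of-the-envelope check on that 50% figure, using nothing but the two correlations reported above (my arithmetic, not Bartram's):

(.38 - .25) / .25 ≈ .52, i.e., roughly a 50% relative increase in the validity coefficient.

And if you think in terms of variance accounted for, .38^2 / .25^2 ≈ 2.3, so the forced-choice criterion more than doubles it.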

Next up, Gamliel and Cahan take a look at group differences on cognitive ability measures versus performance measures (e.g., supervisory ratings). Using recent meta-analytic findings, the authors find group differences to be much larger on cognitive ability measures than on ratings of job performance. They suggest this may be because the tests are more objective and standardized, which I'm not sure I buy (not that they asked me). Not super surprising findings here, but they do reinforce the idea that we need to pay attention to group differences both for the test we're using and for how we're measuring job performance.

Third, Konig et al. set out to learn more about whether candidates can identify what they are being tested on. Using data from 95 participants who completed both an assessment center and a structured interview, the authors found results consistent with previous research--namely, the ability to figure out what you're being tested on contributes to performance on the test. Moreover, it isn't just cognitive ability (which they controlled for). So what is going on? Perhaps job knowledge?

Roch et al. analyzed data from 601 participants and found that absolute performance rating formats were perceived as fairer than relative formats. Not only that, but fairness perceptions also varied within each of the two format types. In addition, rating format influenced perceptions of procedural justice. The researchers focus on implications for performance appraisals, but we know how important procedural justice is for applicants too.

Okay, now on to the section on personality testing. First up, a study by Carless et al. of the criterion-related validity of PDI's Employment Inventory (EI), a popular measure of reliability/conscientiousness. Participants were over 300 blue-collar workers in Australia. Results? A mixed bag. EI performance scores were "reasonable" predictors of some supervisory ratings, but turnover scores were only "weakly related" to turnover intentions and actual turnover. (Side note: I'm not sure, but I think the EI is now sold through "getting bigger all the time" PreVisor. I'm a little fuzzy on that point. What I do know is that you can get a great, if a few years old, review of it for $15 here.)

Next, Byrne et al. present a study of the Emotional Competence Inventory (ECI), an instrument designed to measure emotional intelligence. Data from over 300 students at three universities showed no relationship between ECI scores and either academic performance or general mental ability. ECI scores did show small but significant correlations (generally in the low .20s) with a variety of criteria. However, the relationships with all but one of the criteria (coworkers' ratings of managerial skill) disappeared after controlling for age and personality (as measured by the NEO-FFI). On the plus side, the factor structure of the ECI appeared distinct from that of the personality measure. More details on the study here.
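
(For anyone rusty on what "disappeared after controlling for" means: the authors are essentially examining partial correlations, which strip out the portion of the ECI-criterion relationship that overlaps with the control variables. As a generic illustration--this is the textbook single-control version, not necessarily the exact analysis they ran:

r_xy.z = (r_xy - r_xz * r_yz) / sqrt[(1 - r_xz^2)(1 - r_yz^2)]

When the zero-order r_xy is already in the low .20s, it doesn't take much overlap with age and personality to shrink the partial correlation to non-significance.)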

Last but not least, Viswesvaran, Deller, and Ones summarize some of the major issues presented in this special section on personality and offer some ideas for future research.

Whew!
