Thursday, January 10, 2008

December '07 IJSA, Part 2

In a previous post I described one of the most interesting articles in the December '07 issue of the International Journal of Selection and Assessment (IJSA). In this post I'll go over the rest of the issue.

First up, Carless & Imber with a study of interviewer characteristics. Using a sample of 450 graduate students, they found that personal aspects of the interviewer, such as warmth, friendliness, and humor, as well as job-related aspects such as job knowledge and general competence, influenced both how attracted interviewees were to the job and their job choice intentions. Not only that, aspects of the interviewer affected how anxious the interviewee was. Lesson: Much like recruiters, make sure the people doing the interviewing are trained and are the type of people you'd like representing your organization!

Next, LaHuis, MacLane, and Schlessman with a study of reapplication among 542 applicants to a U.S. government position. Focusing on the 9% who didn't get the job but reapplied the following year, the authors found that "opportunity to perform" played a significant role in reapplication behavior. Lesson: If you want people to reapply, give them the chance to show their strengths.

Third, Carless and Wintle with a study of what attracts applicants to particular job opportunities--specifically looking at flexible career paths and who's doing the recruiting. Participants were 201 "young, inexperienced job seekers" who completed a questionnaire. Results? Flexible career paths were a big attraction (compared to traditional career paths), but recruiter background (HR or external agency) made no difference, in line with previous research finding that recruiter personality is the key rather than things like background or demographics.

Next, Hermelin, Lievens, and Robertson conducted a meta-analysis of assessment center scores. Based on 26 studies (N=5,850) the authors found a corrected correlation of .28 between AC scores and subsequent supervisory ratings, which they hypothesize is lower than the true value due to range restriction of assessment center scores (and lower than the corrected value of .37 that Schmidt & Hunter reported). Alternate version (if the link gets fixed) here.
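For readers curious what "corrected for range restriction" means in practice: a common approach (not necessarily the exact procedure Hermelin et al. used) is the standard Thorndike Case II formula, which scales an observed validity up based on how much the predictor's variability was reduced by selection. Here's a minimal sketch; the 1.5 SD ratio is a hypothetical value for illustration, not a figure from the article.

```python
import math

def correct_for_range_restriction(r_restricted, u):
    """Thorndike Case II correction for direct range restriction.

    r_restricted: validity observed in the restricted (selected) sample
    u: ratio of unrestricted (applicant) SD to restricted (incumbent) SD, > 1
    """
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# Hypothetical illustration: start from the meta-analytic .28 and assume
# the applicant-pool SD is 1.5x the selected-sample SD.
print(f"{correct_for_range_restriction(0.28, 1.5):.2f}")  # prints "0.40"
```

Notice how even a modest amount of range restriction pulls the observed correlation well below the population value, which is the authors' point about why .28 likely understates AC validity.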

Heinsman et al. provide the next study, which looks at how psychologists view the makeup and measurement of competencies, still a hot topic in HR circles. Using data from over 900 applicants who participated in a 1-day selection process, they examined the relationship between competencies (Thinking, Feeling, and Power) and traditional measures of cognitive ability, personality, and assessment center performance. Results? To assess Thinking, psychologists in this study relied on measures of cognitive ability (makes sense!). To assess Feeling, they used interview simulation scores as well as personality test scores. Finally, when assessing Power, they relied mostly on personality scores.

Last but not least, Lievens and Anseel with a fascinating study of creating alternate versions of computerized in-baskets. The alternate version presented a very similar situation for people to respond to but altered the context for 10 of the 20 items. Results? No significant difference in overall in-basket scores between the two forms. So if you're looking to duplicate your in-baskets, check this out! Oh, and there's an alternate version of the article here.

Okay, I might as well tell you about the other two articles, because you might be interested. One is on self and peer ratings and one is on survey non-respondents (hint: it's not the star performers that aren't responding). Both have interesting results, so check 'em out!
