Okay, way past update time. Let's take a look at the latest research (this month's themes: core self-evaluations, SJTs, and, of course, personality tests):
Let's start with a couple from the July issue of the Journal of Applied Psychology:
- First, an application of signaling theory to selection. The authors point out that viewing selection through this lens, where the focus is on the honesty of communication between applicant and employer, can help explain existing findings and point to directions for future research.
- Speaking of honesty, the other study is about a proposed way to reduce faking on personality tests. Specifically, the authors looked at the efficacy of providing applicants feedback about their honesty midway through the test; looks like they found mixed results.
Next, let's look at one from the August issue of the Journal of Organizational Behavior:
- The authors studied the impact that psychological capital (e.g., optimism, self-efficacy) has on job search behavior. They found a positive relationship between psychological capital and perceived employability, which in turn was related to various good (i.e., problem-focused) and not-so-good (i.e., symptom-focused) coping strategies.
Next, two from the Autumn issue of Personnel Psychology:
- More support for the idea of contextualizing personality inventories. What does that mean? Essentially, tailoring the test to work situations in general and, better yet, to specific work environments. In this study the mean criterion-related validity jumped from .11 (non-contextualized) to .25 (contextualized); see the quick arithmetic note after this list.
- Second, what looks like a fascinating study of what factors impact applicant attraction at various stages in the recruitment process. Interestingly, perceived fit was the strongest predictor of attraction but was not a significant predictor of job choice (the strongest predictor was job characteristics). In addition, organizational characteristics and recruitment process characteristics became more important in later stages.
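A quick back-of-the-envelope on that validity jump (my arithmetic, not the authors'): squaring a validity coefficient gives the proportion of criterion variance explained.

.11² ≈ .012 (about 1% of criterion variance) vs. .25² ≈ .063 (about 6%)

In other words, contextualizing roughly quintupled the variance accounted for.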
Okay, those were warm-ups. Let's get into the heavy hitter, the September issue of the International Journal of Selection and Assessment:
- First, more on the importance of core self-evaluations (CSE). In this study the authors found support for CSE explaining incremental variance in performance over ability and conscientiousness. They propose that CSE does so through its impact on learning motivation.
- Think situational judgment tests can't be coached? Think again.
- Should O*NET information be based on analyst ratings or incumbent ratings? Yes. Looks like each provides value.
- Applicant reactions: always a popular topic. This time the location is Mumbai, India. Not surprisingly, resumes and interviews fared well, while graphology and honesty tests did not. However, in an interesting twist, work sample tests were rated unfavorably.
- Do recruiters care about volunteer experience? Not really.
- Might test-takers get fatigued at the end of a long SJT? Yes. Might it impact the psychometric properties? Yes. Might it impact subgroup differences? Umm...sort of.
- How many different ways can you analyze the reliability of an SJT? Turns out, quite a few.
- Aaahhhh yes, emotional intelligence. Haven't heard from you in a while. In this study the authors found, across three samples, positive applicant reactions to the MSCEIT and incremental validity over ability and the Big 5.
- Last but not least, let's end with another personality test (well, an integrity test, really): the venerable Personnel Reaction Blank. This time the authors looked at cross-cultural generalizability (U.S. and Singapore) as well as gender differences.
That's all for now!
3 comments:
Great post. Really technical and helpful. Thanks for sharing.
Though I did not read the Wilkin & Connelly article, you say, "Do recruiters care about volunteer experience? Not really."
The abstract says, "Our results did not suggest significant differences in the ratings given to paid or volunteer experience."
My read on that statement is that the study found paid and volunteer experience are about equally valued. Can you shed any light on this?
Anonymous: you appear to be correct; I think I read too quickly. Looks like (based on this small sample) they valued paid and volunteer experience equally.