Sunday, February 27, 2011
It's journal season--this time let's take a look at the Spring 2011 issue of Personnel Psychology:
First up, Derue, et al. with an important meta-analysis on leadership effectiveness. After looking at 59 studies, they found that leader traits and behaviors together explained a minimum of 31% of the variance in leadership effectiveness. Interestingly, group performance (which I would argue is the most important criterion) was the most difficult to predict. Behaviors tended to explain more variance than traits, but the authors suggest that a model in which behavior mediates the relationship between traits and effectiveness is warranted. Not surprisingly, the best trait predictor depended on the criterion: conscientiousness best predicted leader effectiveness, group performance, and follower job satisfaction, while satisfaction with the leader was best predicted by leader agreeableness (reminds me of a recent IJSA study). The same was true of leader behaviors, although consideration was a good predictor across criteria. A must for anyone interested in leadership research, and you can read an in-press version here.
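(If you're curious what "behavior mediates traits" means in statistical terms, here's a bare-bones sketch of the product-of-coefficients approach in Python. The data are simulated and purely illustrative--this is not DeRue et al.'s model or data.)

```python
# A minimal mediation sketch: trait -> behavior -> effectiveness.
# All numbers are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(4)
n = 2000

trait = rng.normal(size=n)                            # e.g., conscientiousness
behavior = 0.5 * trait + rng.normal(size=n)           # path a: trait -> behavior
effectiveness = 0.6 * behavior + rng.normal(size=n)   # path b: behavior -> effectiveness

def slope(x, y):
    """Simple OLS slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

a = slope(trait, behavior)
total = slope(trait, effectiveness)
# Path b: effect of behavior on effectiveness, controlling for trait
X = np.column_stack([np.ones(n), trait, behavior])
beta, *_ = np.linalg.lstsq(X, effectiveness, rcond=None)
b = beta[2]

print(f"total effect of trait:   {total:.2f}")
print(f"indirect effect (a*b):   {a * b:.2f}")
print(f"direct effect remaining: {total - a * b:.2f}")
```

In a fully mediated model like this one, the indirect effect accounts for essentially all of the total effect--which is the pattern the mediation hypothesis predicts.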
Next, a piece by Melchers, et al. on whether more interview structure really leads to better rating quality (I'll ruin it for you: yes). Specifically, using a sample of primarily undergraduates, the authors found that providing subjects with frame-of-reference (FOR) training and descriptively anchored rating scales led to substantial increases in rater accuracy and interrater reliability. You can read an in-press version here.
Those of you interested in the concept of core self-evaluations will want to read the study by Ferris, et al.
One area where we don't see enough research is newcomer adaptation. The longitudinal study by Wang, et al. of a group of Chinese subjects helps fill that gap by exploring the relationships among adaptability, person-environment fit, and work-related outcomes.
Last but most certainly not least, Campion, et al. provide us with a review of best practices in competency modeling. Specifically, 20 of them. Of particular note to you may be that they distinguish competency modeling from job analysis. Want to read the whole thing? Me too! Wish I could have found an in-press version but no such luck.
Sidenote: while not specifically related to recruitment or selection, Christian, et al.'s piece on engagement may be of interest to several readers.
Tuesday, February 22, 2011
"Grit": An example of will do
Personnel psychologists often make a distinction between factors that indicate a person can do a particular task and those that indicate they will do it. Usually "can do" facets include things like cognitive and physical abilities--baseline traits that a person must possess to even be able to perform the task. "Will do" facets are related to motivation and interest and get at whether the person is likely to perform the task, regardless of ability.
In the March 2011 issue of Fast Company, Dan and Chip Heath (Switch) write about the concept of "true grit" and its importance for successful performance. They point to the recent movie remake of True Grit. While many might assume the title refers to the crusty gunslinger (Rooster Cogburn), it actually refers to Mattie, a teenage girl who hires Cogburn to avenge her father's death.
The Heaths describe several examples where organizational leaders and innovators refused to give up in the face of failure or long odds and went on to impressive success. They even cite research conducted several years ago by Angela Duckworth and her colleagues, who found that scores on a measure of grit predicted retention at West Point (a prestigious U.S. military academy).
What they don't point out is that the retention finding was specific to a summer training program, and that scores on the measure of grit were not superior to other predictors when the criterion was first-year cadet GPA or performance ratings. In addition, the percentage of variance accounted for across the studies was around 4%, and the measure correlated highly with a measure of conscientiousness. However, grit demonstrated incremental validity beyond IQ and conscientiousness, and it's still a fascinating study that you can read here.
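(For those curious how "incremental validity" gets established: below is a quick hierarchical-regression sketch. Everything is simulated--none of these numbers come from Duckworth's studies.)

```python
# A minimal sketch of incremental validity via hierarchical regression:
# fit a base model, add the new predictor, and look at the change in R^2.
# All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

iq = rng.normal(size=n)
conscientiousness = rng.normal(size=n)
# Grit overlaps heavily with conscientiousness, as the studies found
grit = 0.7 * conscientiousness + 0.3 * rng.normal(size=n)
# Outcome driven mostly by IQ/conscientiousness, plus a small unique
# contribution from grit
performance = (0.4 * iq + 0.3 * conscientiousness
               + 0.2 * grit + rng.normal(size=n))

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([iq, conscientiousness], performance)
r2_full = r_squared([iq, conscientiousness, grit], performance)
print(f"Base model R^2: {r2_base:.3f}")
print(f"Full model R^2: {r2_full:.3f}")
print(f"Incremental validity of grit (delta R^2): {r2_full - r2_base:.3f}")
```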
To be sure, "will do" factors are often overlooked when talking about selection. Often we focus exclusively on ability factors, either because we're unsure how to measure motivational factors or because we're afraid to. But that doesn't mean they're not important. We know that in some situations (e.g., jobs with low entry and ability requirements), noncognitive measures can out-predict ability measures.
This article also raised two other issues for me. First, laypeople often conceptualize KSAPs as dichotomous: you're either smart or you're not; you either have integrity or you don't. The reality is that practically anything you can think of measuring lies on a continuum--so we talk of degrees of personality characteristics, or levels of ability. With respect to the topic of this post, the situation is the same: there are shades of grit.
The second issue has to do with having too much of a good thing. One can be too smart for a job; it's not that you can't do it, it's that you'll likely get bored after a short period of time. Similarly, one can have "too much" (or be too far toward either end) of a personality continuum. Take grit: imagine someone so determined that they not only persist in the face of obstacles, they refuse to give up even when presented with overwhelming odds. Now they're bordering on the obsessional and/or delusional.
So what does this all mean? Back to basics:
1) Know the job and its requirements
2) Pick critical, necessary-at-entry KSAPs to measure
3) Select and/or develop high quality measures
4) Know your applicant pool and the likely range of scores you will obtain
5) Recognize that the relationship between tests and job performance is probably not linear (particularly when your concept of job performance is multifaceted)--a quick way to check is sketched below
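(Here's a purely illustrative way of checking point 5: compare a straight-line fit to a curved one, on made-up data where performance plateaus at high scores.)

```python
# A toy nonlinearity check: does a quadratic fit meaningfully out-perform
# a linear one? Data are fabricated so performance shows diminishing returns.
import numpy as np

rng = np.random.default_rng(1)
test = rng.uniform(0, 100, size=500)
# Diminishing returns: performance levels off past a point
performance = np.sqrt(test) + rng.normal(scale=0.8, size=500)

for degree in (1, 2):
    coefs = np.polyfit(test, performance, degree)
    pred = np.polyval(coefs, test)
    ss_res = np.sum((performance - pred) ** 2)
    ss_tot = np.sum((performance - performance.mean()) ** 2)
    print(f"degree {degree}: R^2 = {1 - ss_res / ss_tot:.3f}")
```

If the curved fit does substantially better, a simple more-is-better reading of test scores deserves a second look.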
And finally, back to true grit. The best thing we can do as assessment professionals is demonstrate it ourselves by not taking the easy way out and not folding in the face of obstacles such as the demand for speed over quality, or ignorance. No guns required.
Thursday, February 17, 2011
March IJSA: So much good stuff
The March 2011 issue of the International Journal of Selection and Assessment is out, and it's a doozie. Check it out:
- Beaty, et al. present evidence that unproctored Internet tests (noncognitive: personality and biodata) generally had criterion-related validities similar to those of the same tests administered in a proctored setting, across a spectrum of job performance criteria. Mmmm....UIT...
- Generally when people re-take a cognitive ability test, they do better. But do they do better on other tests of cognitive ability? Matton, et al. describe a study that looked at just that and found the answer to be "no."
- One advantage that is frequently claimed about personality inventories is that their use results in less adverse impact compared to, say, ability or knowledge tests. But might the AI depend on how hires are made (e.g., top-down, compensatory)? Turns out the answer is yes--at least according to some results by Risavy and Hausdorf (a toy illustration appears after this list).
- Hey, government agencies, still debating whether to put more resources into your career web portal? Maybe this will convince you. Selden and Orenstein show that governments with more usable portals and better available content not only attract more applicants per opening, but also see less voluntary turnover among new hires.
- With advances in technology and changes to the work environment, clerical jobs have changed a lot over the last 30 years and the old ways of selecting for these jobs (namely g-loaded tests such as perceptual speed and verbal ability) likely need to be re-thought...right? Well, not so much, at least according to a meta-analysis by Whetzel and her colleagues. In fact, the criterion-related validity values met or exceeded those found 30+ years ago. The more things change...
- De Goede, et al. present the results of a study that explores the relationship between P-O fit and organizational websites, adding the concept of person-industry fit. One implication: if you're trying to attract a more diverse group of candidates, work on making your portal more attractive.
- We spend a lot of time trying to make sure interviews are loaded with job-relevant content. But how much attention do we pay to the impact of applicants' impression management tactics? Huffcutt's results make a compelling argument that we ignore the latter to our detriment, as it may have more to do with interview ratings than the job-relevant content does.
- How does one determine managerial potential? Well, it depends who you ask. Thomason, et al. present results that indicate when supervisors are asked, they focus on task-based personality traits (e.g., conscientiousness), whereas peers focus on contextual traits such as agreeableness. Given that leadership is ultimately about achieving things through subordinates, I wonder what we should be paying attention to....hmmm...how about both?
- Thinking about using self-ratings of political skill as part of the application process? I can certainly see situations where this skill may be helpful, but might this method be susceptible to inflation? Not so much, at least according to results from Blickle, et al.
- Last but definitely not least, Carless & Hetherington with some data on the impact of recruitment timeliness on applicant attraction. The longer we make applicants wait, the less attracted to the organization they will be, right? Not so fast. According to this research, it is perceived timeliness that matters, not actual timeliness (hence the importance of communication). In addition, this relationship is partially mediated by job and organizational characteristics.
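As promised, here's a toy illustration of why the selection method matters for adverse impact, using the familiar four-fifths (80%) rule. The subgroup difference, cutoffs, and group labels are all invented--this is not Risavy and Hausdorf's analysis.

```python
# Four-fifths rule sketch: the same predictor can pass or flag adverse
# impact depending on how scores are used. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Simulate a modest subgroup mean difference on a composite score
scores_a = rng.normal(loc=0.0, size=n)   # group A
scores_b = rng.normal(loc=-0.3, size=n)  # group B, d = 0.3 below A

def ai_ratio(cutoff):
    """Selection-rate ratio (B over A); below 0.80 flags adverse impact."""
    rate_a = np.mean(scores_a >= cutoff)
    rate_b = np.mean(scores_b >= cutoff)
    return rate_b / rate_a

# Top-down selection of the top 10% implies a stringent cutoff;
# a lower pass/fail cutoff is far less stringent
top_down_cutoff = np.quantile(np.concatenate([scores_a, scores_b]), 0.90)
for label, cutoff in [("top-down (top 10%)", top_down_cutoff),
                      ("pass/fail (cutoff at -0.5)", -0.5)]:
    print(f"{label}: AI ratio = {ai_ratio(cutoff):.2f}")
```

Same predictor, same scores--whether the 80% rule is violated depends entirely on how the scores are used.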
Saturday, February 12, 2011
Research update: From item context to signaling theory and more
Here are some research articles from the last couple months:
Grand et al.'s study showed that adding a job-relevant context to test items--even under explicit stereotype threat--had either beneficial or no effects on test performance and test perceptions among female test takers. More evidence of the benefit of tailoring items to the particular position being tested for.
Impression management during assessments is often considered to be a negative thing--i.e., a source of error. But as Kleinmann and Klehe point out in their study of interviewee behavior, it may be an additional source of validity, and can be related to performance. At the very least it indicates that the person knows enough to alter their behavior to fit the job!
Celani and Singh provide a literature review of the role of signaling theory in applicant attraction (applicants making inferences about important aspects of the job/organization from characteristics of the recruitment process) and how social identity interacts to impact attraction outcomes.
Last but not least, Soto et al. with a fascinating study of age and personality characteristics. Over a million individuals participated over the web, and the authors highlight several key results, such as late childhood/early adolescence being key periods for age trends, strong maturity and adjustment trends over adulthood, and the importance of looking at facet-level results.
Sunday, February 06, 2011
We can make assessments "fun"...should we?
Remember when tests were fun?
Neither do I. Tests and assessments have a long history of being about as popular as the dentist. Starting in grade school, many come to dread them as lifeless--and often inaccurate--judges of worth. (Of course, doing well on them tends to improve your view.)
Tests don't have to be boring. We write structured interview questions and multiple-choice questions because that's what we've always done. And we know how to do it right.
But there are plenty of ways of making them more interesting, from the way they're written (try: "You are in a maze of twisty passages, all alike"), to their presentation (e.g., animation, video), to the way people progress (e.g., adaptive testing--a toy sketch appears below), to the way results are given to you ("You've got the high score!"). Today more than ever before we have the flexibility to take those dry, monochromatic presentations and turn them into something eye-catching and even...dare I say...fun?
(10 bonus points to those of you who caught the Adventure reference in the preceding paragraph)
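And since I mentioned adaptive testing, here's a toy sketch of the core idea: pick the next item to match the current ability estimate. Real CAT engines use full IRT information functions and maximum-likelihood scoring; this staircase version is just for intuition, and the item bank is made up.

```python
# Bare-bones adaptive testing: after each response, choose the unused
# item closest in difficulty to the current ability estimate, then nudge
# the estimate up (correct) or down (incorrect). Purely illustrative.
import numpy as np

rng = np.random.default_rng(3)
item_difficulty = np.linspace(-2, 2, 20)  # small hypothetical item bank
true_ability = 0.8
theta, step = 0.0, 1.0   # running ability estimate and step size
used = set()

for _ in range(8):
    # Select the unused item best matched to the current estimate
    candidates = [i for i in range(len(item_difficulty)) if i not in used]
    item = min(candidates, key=lambda i: abs(item_difficulty[i] - theta))
    used.add(item)
    # Simulate a response under a Rasch (1PL) model
    p_correct = 1 / (1 + np.exp(-(true_ability - item_difficulty[item])))
    correct = rng.random() < p_correct
    # Move toward harder items if correct, easier if not
    theta += step if correct else -step
    step = max(step * 0.7, 0.1)  # shrink the step as evidence accumulates

print(f"final ability estimate: {theta:.2f} (true value {true_ability})")
```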
The question is: Should we?
There's been quite a bit written lately about the "gamification" of assessments. Heck, I've been on that bandwagon for years. America's Army, first published in 2002 as the U.S. Army's foray into first-person shooters, was an early example of the potential marriage between staffing and entertainment--yes, it's not technically a personnel assessment, but the recruiting mission is obvious, as is the potential use of the results. Since then we've seen a steady stream of innovation, from the use of branching video to realistic job preview-type assessments presented online.
There are several reasons why we might want to make assessments a wee bit more entertaining:
A) Because we can. Please don't use this reason.
B) As a recruiting tool to help you distinguish yourself from competitors ("Look, we're fun and cool! Join us!")
C) To encourage candidates to complete the assessment ("Yes it's a little long, but the time will fly by!")
D) As part of a realistic job preview ("Not sure if you want the job? Find out virtually!"). Nothing wrong with that. Self-selection out is good.
E) Because it helps us measure more accurately. Ah-ha. Now we're getting somewhere. To the extent that entertainment/interactivity helps us overcome candidate fatigue or other error, or otherwise helps us measure relevant KSAPs more accurately, we've won the game.
(10 bonus points to those of you who noticed the irony of me using a traditional multiple-choice presentation in that last part)
(50 bonus points to those of you who have noticed the use of bonus points in this post)
So we have a number of reasons we might want to make assessments more fun. But there are some reasons why we might not want to--or that should at least give us pause:
1) Tests are serious. No, really. They have an enormous impact on people's lives. We don't want to water down their nature so much that we disrespect our applicants.
2) The "tests" out there right now that are the most "fun" are also not ones you'd want to use to select people (e.g., which animal are you?), although there are some surprising hybrids (find your Star Wars twin).*
3) Those assessments that are kinda-pitched-as-actual-assessments-but-not-really-don't-hold-us-to-that, have already started blurring the line (e.g., True Colors). We want to draw a clear distinction between assessments purely for fun and "okay, you need to take this seriously, it's for a job."
4) We can easily mess this up. All it takes is a high-level manager getting it into his/her head that we need to "make these test things more fun" and suddenly we're pressured into creating an expensive mess that doesn't deliver (like Daikatana).
(10 bonus points to those of you who noticed I switched response options from letters to numbers)
(50 bonus points to those of you that completed all three of the example assessments)
(100 bonus points if you know what Daikatana is)
So is there room for us to be a little more creative and investigate alternate--more immersive, interactive--ways of assessing candidate qualifications? Absolutely. But should we use caution to make sure we don't have a big-budget flop on our hands? You bet.
Now count up your points from this blog post. How did you do?
0 points: Wait...you did READ this, right?
10-20 points: Okay, maybe you're tired.
30-130 points: The force is strong with you...but you are not a Jedi, yet.
More than 130 points: Call me.
* I'm an owl. Or maybe a penguin. Oh, and kinda like Darth Vader. But also Princess Leia. And Mon Mothma. Now I'm confused.
Wednesday, February 02, 2011
New blog: Select Perspectives
There's a new blog on the block, and this time it's the folks over at Select International and the name of their blog is Select Perspectives. Three posts over a span of one-and-a-half weeks (they started at the end of January) is promising. I particularly enjoyed the post about talking to fifth graders about I/O psychology.
Here's to hopin' that we're witnessing the birth of a valuable addition to our blog roll! Welcome.
Oh, and the RSS feed is riiiggghhht here.