Monday, March 31, 2008
Is there a relationship between age and job performance? It's an important question for many reasons, including the fact that claims of age discrimination appear to be on the rise. Ng and Feldman set out to better understand this issue, and their meta-analysis appears in the March 2008 issue of the Journal of Applied Psychology.
Previous research has generally shown a weak relationship between age and job performance--at least when we look at objective measures. But the current authors set out to use a much broader array (10 to be exact) of criterion measures, including workplace aggression, safety performance, and OCBs.
So what did they find? Well, that's where things get a bit complex. Although there did not appear to be a relationship between age and several outcomes, including core task performance, creativity, and performance in training programs, age had stronger relationships with the other seven measures. In addition, age had a curvilinear relationship with core task performance and CWBs, and results varied depending on how the study was conducted.
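For those wondering how a curvilinear relationship like that gets detected, the usual move is to add a squared term to the regression and see whether it buys you anything. Here's a minimal sketch using simulated data--the ages, scores, and coefficients are invented for illustration and are not from the meta-analysis:

```python
import numpy as np

# Simulated data for illustration only -- not from Ng & Feldman (2008)
rng = np.random.default_rng(42)
age = rng.uniform(20, 65, size=500)
# Hypothetical inverted-U: performance rises with age, then levels off/declines
performance = 2.0 + 0.10 * age - 0.001 * age**2 + rng.normal(0, 0.5, size=500)

# Fit linear-only and linear + quadratic models via least squares
X_lin = np.column_stack([np.ones_like(age), age])
X_quad = np.column_stack([np.ones_like(age), age, age**2])

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("R^2, linear only:       ", round(r_squared(X_lin, performance), 3))
print("R^2, linear + quadratic:", round(r_squared(X_quad, performance), 3))
# A meaningful jump in R^2 when the squared term is added is the usual
# signal of a curvilinear (e.g., inverted-U) relationship.
```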
So does age have a relationship with job performance? Like all important research questions, the answer is an emphatic it depends.
Other articles
There's quite a bit of good research in this volume, including:
- The development of a potentially useful way to predict team member performance
- A fascinating look at how frame-of-reference influences the validity of personality measures (pre-published version here)
- A discussion of the importance of the distinction between constructs (e.g., ability, personality) and methods (e.g., interviews) when comparing predictors in personnel selection
- How to test for adverse impact when your numbers are small (hint: significance testing bad, z-score good; see the sketch after this list)
- Last but not least, a meta-analysis of the direction of the relationship between attitudes and job performance--what causes what? (hint: attitude matters...but not that much)
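Since the adverse impact question comes up constantly, here's a minimal sketch of the two standard checks--the four-fifths rule and the two-proportion z-statistic. The applicant-flow numbers are invented for illustration, and this is generic practice, not the specific small-sample procedure the article recommends:

```python
import math

# Hypothetical applicant-flow numbers for illustration only
majority_applicants, majority_hired = 120, 48
minority_applicants, minority_hired = 25, 6

rate_majority = majority_hired / majority_applicants   # 0.40
rate_minority = minority_hired / minority_applicants   # 0.24

# Four-fifths (80%) rule: flag if the minority rate is < 80% of the majority rate
impact_ratio = rate_minority / rate_majority
print(f"Impact ratio: {impact_ratio:.2f} (flag if < 0.80)")

# Two-proportion z-statistic (pooled), the usual "two standard deviations" test
pooled = (majority_hired + minority_hired) / (majority_applicants + minority_applicants)
se = math.sqrt(pooled * (1 - pooled) * (1 / majority_applicants + 1 / minority_applicants))
z = (rate_majority - rate_minority) / se
print(f"z = {z:.2f} (values beyond ~1.96 are conventionally 'significant')")
# With small minority counts both checks get unstable, which is exactly
# why guidance on small-sample testing is worth a read.
```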
Thursday, March 27, 2008
IPMAAC Conference Registration Opens
Registration is now open for the 32nd annual IPMAAC conference, to be held in Oakland, California in the beautiful San Francisco Bay Area on June 8-11.
There's an incredible program lined up, including plenary presentations by:
Dr. David Campbell, Fellow at the Center for Creative Leadership, co-author of the popular Strong-Campbell Interest Inventory and author of several books, including "If you don't know where you're going, you'll probably end up somewhere else."
Dr. Neal Schmitt, Professor of Psychology and Management at Michigan State University, author, and recognized expert on organizational selection procedures.
Dr. Bob Guion, Professor Emeritus at Bowling Green State University, consultant, and one of the field's most acclaimed authors (including this gem).
Dr. Richard Jeanneret, president of Valtera, expert in individual assessment, and provider of expert witness testimony in litigation.
There will also be several workshops, including ones focused on:
* Assessment centers
* Structured employment interviews
* Job analysis
* Situational judgment tests
In addition, there will be nearly 50 concurrent sessions covering everything from competency modeling to web-based testing. The conference promises to continue to be the annual event for personnel selection professionals who live in the intersection between research and practice.
More details here.
Tuesday, March 25, 2008
Too fat or too thin? You may not get hired.
Job candidates who are either too fat or too thin may have a more difficult time getting hired than those in the middle weight ranges, according to a study by Swami et al. reported in the most recent issue of the Journal of Applied Social Psychology.
Weighting in line
The authors found that when men were asked to rate a variety of female pictures for either a management position or for providing help (N=30 and 28, respectively), they were less likely to hire or help women with body mass indices (BMI) over 30 or under 15. Those with a slender body (BMI = 19-20) were most likely to be hired or helped. This shouldn't be surprising, given that studies have consistently linked physical attributes, including weight, with employment decisions, but it's certainly a reminder to watch your biases when evaluating candidates!
Predict-ability
In another article, Truxillo et al. found a relationship between cognitive ability and the ability to accurately judge one's performance on an employment test. Using a video-based situational judgment test of customer service skills, the authors found that those with high cognitive ability were able to predict their performance while those with low cognitive ability were not. Practical implications? Providing thorough test feedback may be particularly important for candidates lower in cognitive ability as they may be more likely to be surprised (and dismayed) by the results. This means providing information prior to the test as well as afterward (e.g., how it was developed, how it is scored, how you can improve your performance).
Working IT
In a third study, Johnson et al. found gender and ethnic group differences in how IT careers are perceived as well as in self-efficacy related to IT. Using data from 159 African- and 98 Anglo-Americans, the authors found that African American men reported higher levels of IT self-efficacy than all other groups, whereas Anglo women reported the lowest levels. In addition, Anglos had more negative stereotypes of IT professionals than did African Americans. This study had a small sample size, but the implication is that how people see their own ability related to an occupation, as well as how they perceive those in it, influences their career choices. This will in turn impact your applicant demographics as well as your recruiting success.
The rest
There are some other interesting reads in here, including:
When emotional displays of leaders may increase follower performance
How to give performance feedback
Self-perceptions of ethical behavior
Wednesday, March 19, 2008
Does interview coaching increase validity?
Does coaching interview-takers actually increase the predictive validity of the interview? That's certainly what the results of a recent study by Maurer et al. seem to indicate.
In the April issue of the Journal of Organizational Behavior, the researchers describe a predictive study where 146 interviewees (public safety incumbents) were provided with coaching before their situational interview. Importantly, this was coaching designed around the content of the interview and helping candidates communicate during the interview--not generic strategies like "smile a lot."
Results? Predictive validity was higher in the coached sample than an uncoached sample. Why? Well, it makes sense that if candidates are better at addressing the question--both because they have more knowledge and because they're expressing themselves better--you're getting a better view of their true knowledge (i.e., true score) and less interference (i.e., error).
Implications? If you conduct interviews as part of your hiring process (and is there someone out there that doesn't?), strongly consider providing pre-interview coaching (although I might call it something else since coaching sounds a bit suspicious). It may take a bit of your time, but it will pay off in the long run by improving your ability to predict job performance AND candidates will be happier. Big win-win.
The other study to read in this issue is by Becton et al., who looked at performance during and reactions to two selection procedures among White and African-American test-takers. The two tests were a written job knowledge test and a situational interview. The candidates were competing for promotion to Sergeant positions in a police department. Results? Both groups felt the interviews were more job related than the written test. And although African-American candidates performed worse on the written test, they felt that overall both methods were more job related (compared to Whites). Why is this important? Because some have theorized that subgroup differences are related to differences in test-taking motivation. This study suggests there's something else going on.
Monday, March 17, 2008
Grading topgrading
A recent issue of Workforce Magazine highlighted the lifelong work of Brad Smart, who vigorously endorses a method of assessment he calls "Topgrading."
What is topgrading? According to the article (and the website), it's a hiring method that places emphasis on rigorous, structured behavioral interviews using pre-established rating scales in conjunction with in-depth reference checking. The goal is to go beyond normal behavioral interviews, which are susceptible to faking, and ask about each and every full-time job.
Coined by Brad Smart, topgrading has been getting more press lately, and Smart claims a history of success with the method, which isn't surprising given that we know that structured interviews are one of the most predictive forms of assessment. The method has many fans, including Jack Welch.
The technique does have its critics. For example, the article quotes a representative from DDI (a competitor) as saying DDI's method is more job-related and "objectively valid." DDI's approach is to use a larger variety of assessments to get a fuller picture of the candidate.
So here's the thing. When we're looking at a hiring process we have a whole menu of choices. We know certain types of tests tend to work well across the board (e.g., cognitive ability, work sample tests) while others typically don't (e.g., interest inventories).
We also know that tailoring the assessment method to fit the requirements of the job is critically important--and a fundamental building block of quality assessment. For example, matching personality requirements with the proper personality inventory makes a huge difference.
So is topgrading the right way to go? You can guess my answer: it depends. For the types of jobs it seems to be used for frequently, C-level positions, it probably does a pretty good job of predicting performance and those candidates may be more willing to sit through a very long interview. For other positions, a wider range of assessment options is probably the better way to go. It all comes back to the results of your job analysis and your candidate pool.
Like most things in life, there is no single right way, no one answer. And we can't forget that job performance is about much more than just raw talent, and focusing strictly on talent can be hazardous for your organization. But ya gotta give a lot of credit to the Smarts for evangelizing high-quality structured interviews.
Thursday, March 13, 2008
New York City settles discrimination case for $21M
Eliot Spitzer isn't the only one in New York that's paying for mistakes.
New York City has agreed to settle an employment discrimination case that dates back to 1999 for $21 million. This case is particularly interesting given its focus on recruiting practices.
The class action lawsuit was filed by black and Hispanic employees of the Department of Parks and Recreation who complained that the department was illegally discriminating in its promotion and assignment practices.
Specifically:
"The plaintiffs complained that they were bypassed by promotions because of a recruiting program Mr. Stern [the former Parks commissioner] had started to recruit young graduates of elite colleges — nearly all of them white — to fill positions in the agency."
Of the recruiting program, Mr. Stern said:
"The program was to get young college graduates to work long hours at low salaries. The problem was you couldn’t [get] black graduates to work for $22,000 or $25,000, either because they had loans or were offered better jobs by companies that wanted them."
What could the City have done to prevent this situation? Given the actual statistics (the article states 40 of the 179 hired were black or Hispanic), this was likely more about the fairness and perception of the process rather than hiring numbers. A different communication strategy and engagement with current employees likely would have gone a long way toward preventing the complaints.
Note that this lawsuit is separate from, but related to, one filed in 2002 (and settled in 2005) which claimed that the department was illegally discriminating by favoring whites for promotion. That suit contended that:
"Time and again...the Parks Department failed to follow any objective guidelines for determining promotions and filling management positions, failed to post notices of job openings, and ‘’rarely, if ever'’ conducted the required interviews for vacancies."
As part of the current settlement agreement, the City agreed to:
"train interviewers to ensure that employees who apply for promotions are treated fairly and objectively; and to examine the process by which managers are selected in the future."
Good lessons for employers everywhere.
Tuesday, March 11, 2008
The diversity-validity dilemma (+ free articles!!)
The latest issue of Personnel Psychology has some great articles in it and right now they're free! So before you do anything else, get while the gettin's good, because normally each article will run ya $30.
So what's in there? The main attraction is a great series of articles on the "diversity-validity" dilemma, which Pyburn, Ployhart, and Kravitz, in their article on the legal context, define as:
"The ability of organizations to simultaneously identify high-quality candidates and establish a diverse work force can be hindered by the fact that many of the more predictive selection procedures negatively influence the pass rates of racioethnic minority group members (non-Whites) and women."
This article is a great short read that goes over the major legal points, including adverse impact and the major court cases.
The next article, by Ployhart and Holtz, is a print-and-save type article (yes it's that good) that summarizes the various strategies employers can use to help resolve the dilemma. The article includes a couple of great tables, including one that summarizes most selection mechanisms with their corresponding criterion-related validity and d-values (pp. 155-156) and another that summarizes the various resolutions to the dilemma (pp. 158-163).
Bottom line from that article? I'll let the authors say it:
"Among the most effective strategies, the only strategy that does not also reduce validity is assessing the full range of KSAOs." (bold added)
Hallelujah. Yes, certain assessment methods tend to work better than others (e.g., structured interviews, job knowledge tests) but the best approach is plain old fashioned good practice: Start with job analysis and use the testing methods that best target the knowledge, skills, abilities, and other characteristics (KSAOs) that rise to the top. It really is pretty simple.
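To make the validity-diversity trade-off concrete, here's a back-of-the-envelope sketch of the kind of arithmetic behind those tables: what happens to validity and to the expected subgroup difference (d) when you combine a high-validity, high-d predictor with a lower-d one. The validities, d-values, and intercorrelation below are invented round numbers, not the figures from Ployhart and Holtz:

```python
import math

# Invented, illustrative values (not from the article's tables):
# predictor 1: cognitive ability -- higher validity, larger subgroup d
# predictor 2: structured interview -- decent validity, smaller subgroup d
validity = {"cognitive": 0.50, "interview": 0.40}
d_value = {"cognitive": 1.00, "interview": 0.30}
r12 = 0.25  # assumed correlation between the two predictors

def composite(values, r):
    """Unit-weighted composite of two standardized predictors."""
    return (values[0] + values[1]) / math.sqrt(2 + 2 * r)

comp_validity = composite(list(validity.values()), r12)
comp_d = composite(list(d_value.values()), r12)

print(f"Composite validity:   {comp_validity:.2f}")
print(f"Composite subgroup d: {comp_d:.2f}")
# Broadening the KSAOs measured (adding the lower-d predictor) keeps validity
# roughly intact while pulling the expected subgroup difference well below
# that of the high-d predictor alone -- the article's bottom line in miniature.
```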
The third article in the series is another fabulous one, this time targeting the role that affirmative action (AA) plays in the dilemma.
In it, Kravitz provides a great overview of the basis of AA, attitudes about AA, and provides some answers to some controversial issues, including:
- Does discrimination still occur? (Answer: you bet)
- What is the economic impact of AA on target groups? (A: it's complicated)
- What is the economic impact of AA on organizations? (A: apparently very little)
- Does AA lead to stigmatization of target group members by others? (A: it can)
- Does AA lead to self-stigmatization of target group members? (A: hard to say)
The article then wraps up with some great practical recommendations, the two most important of which are strong, visible, ongoing support of management and the development of an appropriate culture.
Last but not least, don't miss the other great content in this issue, including Mount et al.'s article, "Incremental validity of perceptual speed and accuracy over general mental ability," and Taylor et al.'s article, "The transportability of job information across countries."
Now get out there and get some free content!
Monday, March 10, 2008
Warning: Warnings may not work
Although faking is a persistent issue in personality testing, no one agrees on the best way to handle it. Some have suggested including "warning" statements in the test: letting applicants know there is a lie scale or some repercussions for false responses. But researchers are far from agreed on this strategy.
Now a new study out in the latest issue of Human Performance adds weight to the argument that warnings may not help us avoid the faking issue.
In the study, researchers had 464 participants fill out personality inventories in either a "warned" or "unwarned" condition. They then looked at the convergence of their scores with scores given to them by observers.
Results? Lower mean scores on some personality dimensions (which is often what happens) but no improvement in the convergence between self- and other-ratings. In other words, the warning changed the scores but didn't make them any more accurate. Implication: simply warning applicants that there are consequences for "inflating" their scores may not do much. Fortunately, it may not matter, as well-constructed personality inventories (when used properly) still show useful levels of validity.
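"Convergence" here just means the correlation between self-ratings and observer ratings. Here's a minimal sketch of the comparison using simulated scores (not the study's data) to show the pattern--a warning can shift means without improving accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 232  # illustrative per-condition sample size, not the study's actual split

def simulate_condition(mean_shift):
    """Simulate self- and observer-ratings of a trait; mean_shift mimics
    score inflation when no warning is given, without changing rank order."""
    true_trait = rng.normal(0, 1, n)
    observer = true_trait + rng.normal(0, 0.7, n)
    self_report = true_trait + rng.normal(0, 0.7, n) + mean_shift
    return self_report, observer

for label, shift in [("unwarned", 0.3), ("warned", 0.0)]:
    self_r, obs_r = simulate_condition(shift)
    r = np.corrcoef(self_r, obs_r)[0, 1]
    print(f"{label}: mean self-rating = {self_r.mean():.2f}, "
          f"self-observer r = {r:.2f}")
# The warning lowers the mean self-rating but leaves the self-observer
# correlation essentially unchanged -- the pattern the study reports.
```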
The other article in this issue related to assessment looked at the relationship between personality (specifically neuroticism), self-efficacy, gender, and performance (alternate version here). Using data from nearly 900 freshmen from 10 different U.S. colleges and universities, the author found several results:
- female participants reported significantly lower levels of emotional stability and (to a lesser extent) self-efficacy--an important consideration if using these scores for selection
- there was a positive relationship between emotional stability and self-efficacy for female participants but the relationship for males was "nearly zero"
- emotional stability and gender interacted to affect self-efficacy which, in turn, affected performance (measured by GPA)
The last point is (to me) the most interesting, as it suggests that personality scores may predict performance indirectly through their relationship with other constructs (in this case, self-efficacy). This suggests another layer of analysis is needed when looking at the utility of personality tests.
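To see what "predicts indirectly through another construct" looks like in practice, here's a minimal regression sketch of a simple indirect effect. The sample and coefficients are invented, and this is the generic mediation logic (without the gender moderation), not the author's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 900  # similar in size to the study's freshman sample; values invented

emotional_stability = rng.normal(0, 1, n)
# Self-efficacy partly driven by emotional stability (path a)
self_efficacy = 0.4 * emotional_stability + rng.normal(0, 1, n)
# GPA driven mostly by self-efficacy (path b), little direct effect of stability
gpa = 3.0 + 0.3 * self_efficacy + 0.02 * emotional_stability + rng.normal(0, 0.4, n)

def ols_betas(x_cols, y):
    """Least-squares coefficients for y regressed on [intercept] + x_cols."""
    X = np.column_stack([np.ones(len(y))] + x_cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols_betas([emotional_stability], self_efficacy)[1]       # stability -> efficacy
gpa_betas = ols_betas([self_efficacy, emotional_stability], gpa)
b, c_prime = gpa_betas[1], gpa_betas[2]                       # efficacy -> GPA; direct path

print(f"a (stability -> efficacy): {a:.2f}")
print(f"b (efficacy -> GPA):       {b:.2f}")
print(f"indirect effect a*b:       {a * b:.2f}  vs. direct effect {c_prime:.2f}")
# A nontrivial a*b alongside a near-zero direct path is the signature of
# prediction operating "through" self-efficacy rather than directly.
```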
Friday, March 07, 2008
Hiring humor
A little Dilbert humor for your Friday...
Increasing self-selection is fine, but you can get carried away
And what will you do when the perfect candidate shows up?
(Side note: I swear that second cartoon rings a bell)
Tuesday, March 04, 2008
Hiring a president
In my last post I wrote about how looking at pure length of experience probably isn't the best way to pick someone for a top leadership position--like President of the United States. This comes from decades of research on the relationship between work experience and job performance (interestingly Time magazine just published an article that comes to a similar conclusion using research on expertise).
So how would we go about making the most informed decision if we treated this like a hiring decision rather than an election? We've already heard from the recruitosphere on this issue. Now it's time for an assessment perspective.
We know right off the bat we need some tests to differentiate between the best candidates. And like all hiring decisions, we'd choose tests by starting out with good job analysis data. But unfortunately we don't have any.
"Waddya mean?" you say. "There's lots of experts and articles out there that have documented what makes a good president!" Ahh, yes, but that's not how we would conduct a job analysis for hiring someone. We don't just conduct a literature review, we follow the requirements of the Uniform Guidelines by, among other things, creating detailed statements describing the work to be performed and the knowledge, skills, abilities, and other characteristics (KSAOs) necessary to do it. We also have subject matter experts rate these statements on things like critical, frequency, and necessity at entry to the job.
Can you imagine getting the current incumbent in the same room with previous presidents and conducting a job analysis session?! Count me in on that meeting!
But since that's not going to happen, how can we use assessment research to inform the job of hiring a president? What do years of research tell us about hiring someone who is likely to succeed at this type of job?
Here are the tests I would consider:
- Cognitive ability test. For a complex job like president, high cognitive ability is an absolute must, and research shows ability is the #1 predictor for complex jobs. Unfortunately, we (usually) have a field of very smart applicants, so giving them an ability test might not narrow the field.
- A work sample/performance test. Each candidate is given a live scenario that's pretty close to what they'd face as president. A discussion with a world leader, acting quickly in an emergency, a press conference, or serving as mediator between two disagreeing parties. Sit back and rate the performance using pre-established rating scales.
- A structured interview. This is no softball interview with questions about favorite memories. Each candidate gets the same challenging job-related questions and we have a pre-determined rating scale with benchmarks for judging good answers.
- A job knowledge test. A comprehensive written test covering all of the topics that a president would be expected to know. If you think about it, it's rather scary to think that we hire a president without gauging their full knowledge.
- What about a personality test? This is probably the trickiest (but potentially most interesting) of all the tests. If the job analysis showed that a certain trait, measurable by a reputable instrument, related to success (and some attempts have been made at this), we could go forward. Research has indicated that, particularly when informed by job analysis, personality tests can have useful levels of performance prediction.
- The best: All of these! Imagine an assessment center-like format where the candidates go through a day-long battery of all these tests. Police officer candidates often have to do it--why not arguably the most powerful position in the world?
What don't you see here? "Debates" that consist mostly of canned phrases, speeches to supporters, and policies that may or may not have been written by the candidate. In other words, most of what we have now. This is similar to hiring someone based purely on a resume they created.
Imagine having all the data that these tests would provide. Talk about an informed hire!
Saturday, March 01, 2008
Does experience matter?
There's been a lot of talk this election season about whether experience matters when it comes to the job of U.S. president.
There's been a lot of back and forth, but I haven't heard a lot of discussion about whether there's any research related to the question. A lot of folks might be surprised to learn there's actually quite a bit of research directly related to this point. And we can use it to inform decisions like selecting a president--or any other leader for that matter.
So what does the research say? The best sources of research and analysis on this topic (e.g., this one and this one among others) have reached some general conclusions:
1. The most important conclusion is that the answer depends on how you define experience (e.g., amount, time, type), how you define job performance (e.g., task, contextual), and the particular job you're looking at. There is no single answer.
2. Experience does predict job performance, but not as well as, say, cognitive ability--this is particularly true for high-complexity jobs.
3. Length of experience best predicts job performance when incumbents have relatively low amounts of experience (e.g., entry-level jobs).
4. Length of experience best predicts job performance when the job is low-complexity. At high levels of complexity it does significantly worse at predicting performance. After, say, about 5 years of experience, more doesn't seem to add anything to predicting performance.
5. Prediction is increased when we look at amount of experience performing particular tasks rather than length of experience. This makes sense--just because someone's held a job for 20 years doesn't mean they've performed the tasks you're interested in (and done them well).
So what does all this mean for, say, choosing a president? I'm afraid the answer is not simple, which is as it should be. Pure amount of experience doesn't appear to be all that important after a few years (although this is difficult to analyze since there's only one incumbent at a time!). Ultimately the question is what type of experience is important--and THAT question hasn't been answered.
For a highly complex job like president, simply looking at experience does not seem the best way to measure and predict performance. So from a personnel assessment standpoint, how would we hypothetically select a president? I'll cover that in my next post: Hiring a President.