At the end of this, the first full year of this blog's existence, I decided to take a look back at 2007 and give you my Top 5 most popular posts of the year:
1. Jobfox plays matchmaker (there continues to be significant interest in Jobfox and their non-traditional approach to matching applicants with employers)
2. Reliability and validity--it's okay to Despair(.com). Whether it's the statistics words or Despair, I'll never know. But people sure like those little posters (and remember, you can make your own).
3. Personality testing basics (Part 2). As you can see from the sidebar survey, folks continue to be very interested in personality testing.
4. Wonderlic revises their cognitive ability test. Wonderlic, one of the oldest and most famous testing companies, continues to generate interest.
5. Checkster and SkillSurvey automate reference checking. There's further development to be had, but I do believe these tools could be a boon to HR and supervisors alike.
Okay, so enough about me. What about what everyone else is writing about? Here are my nominations for content of the year:
1. Morgeson et al. fired a shot across the bow of personality testing with their piece in Personnel Psychology that resulted in multiple, shall we say, not so thrilled responses. I don't know where this debate is going (although I suspect alternate measurement methods will play a part) but it sure is fun to watch!
2. There were some great books I came across this year. Particular props for Understanding statistics, Evidence-based management, and Personality and the fate of organizations. Yes, they were all published in 2006...are you saying I'm behind?
3. Dineen et al.'s great little piece of research on P-O fit and website design in the March issue of J.A.P. that I wrote about here. Take a look at your career website with these results in mind.
4. The Talent Unconference was a big success, and I'm very thankful that many of the presentations were videotaped; I put up links to some of them here
5. McDaniel et al.'s meta-analysis of situational judgment test instructions. Not only is this a great piece of research, it's (still) free!
So what about my New Year's wish from last year? I'm still waiting. Although if people-search databases like Spock eventually build up enough steam...perhaps I'll get my wish?
Here's to hoping 2008 is filled with interesting and useful things!
Celebrating 10 years of the science and practice of matching employer needs with individual talent.
Monday, December 31, 2007
Sunday, December 23, 2007
Links a go-go
A smörgåsbord of good reading for this holiday week, 2007:
More evidence that IQ has an environmental component
Teens creating Internet content in greater numbers
New law increases mandatory retirement age for commercial pilots
Wednesday best day of week to send out recruitment e-mail (hat tip)
How to improve your corporate career website (Part I, Part II, Part III, Part IV)
Ford, et al. wrap up EEOC lawsuit over written exam
Free ATS! No, really! (Okay, there are several installation steps)
...and last but not least, for those of you doing major shopping this week:
Recruiting at the mall (I've wondered why more orgs don't do this!)
Monday, December 17, 2007
Call for Proposals for 2008 IPMAAC Conference
You're invited to submit a proposal for the 2008 IPMAAC Conference, to be held on June 8-11. The theme of next year's conference is Building Bridges: Recruitment, Selection, and Assessment and the conference will be in Oakland, California in the beautiful San Francisco Bay Area.
IPMAAC (IPMA-HR Assessment Council) is the leading organization for HR assessment and selection professionals interested in such diverse topics as job analysis, personality testing, recruiting, and web-based assessment.
IPMAAC members are psychologists, attorneys, consultants, management consultants, academic faculty and students, and other professionals who share a passion for finding and hiring the right person for the job. Whether you're from the public sector, private sector, or non-profit, everyone is welcome.
Please consider submitting a proposal for the 2008 conference--it could be a presentation, a tutorial, a panel discussion, a symposium, or a pre-conference workshop. If you have something you'd like to share, don't be shy.
Even if you don't submit a proposal, plan on attending the conference. It's a great opportunity to meet knowledgeable and passionate people and learn what everyone else is doing out there. Some presentations at last year's conference, held in New Orleans, included:
Succession planning and talent management
Using personality assessments in community service organizations
Legal update
Potholes on the road to Internet Applicant compliance
See you there!
Friday, December 14, 2007
Links a go-go
Good reading for December 14, 2007:
Why your company needs to be on Facebook
Target Corp. to pay $510K for race discrimination in application process
Check out the U.S. Office of Personnel Management's virtual conference
Monster launches website targeting Hispanic applicants
SIOP's December, 2007 newsletter
Is your organization green? Candidates want to know.
Using social networking sites to reach out to entry-level hires
The importance of background checks (white paper)
Wednesday, December 12, 2007
HR.com's Virtual Conference
If you want to see something creative, head on over to HR.com's virtual conference called VIEW, which is taking place today and tomorrow.
HR.com says this conference, which is very Second Life-ish, will have 40+ speakers, 1000+ attendees and 70+ vendors.
Right now I'm watching Carly Fiorina talk about leadership. Later presentations include:
Managing a Talent Pool for Succession Planning
The Federal E-Verify Program and Electronic I-9 Compliance
Quality of Hire
Creating Value Exchange in the Candidate Experience
...and a lot more. Creative stuff! And oh yeah, it's free.
Monday, December 10, 2007
November '07 Issue of J.A.P.
The November 2007 issue of the Journal of Applied Psychology is full of interesting articles, including several relating to recruiting and assessment. Let's take a look:
First, a field study by Hebl et al. on pregnancy discrimination. Female confederates posed as job applicants or customers at retail stores, sometimes wearing a pregnancy prosthesis. As "pregnant" customers, they received more "benevolent" behavior (e.g., touching and over-friendliness), but as job applicants they received more hostile behavior (e.g., rudeness). The latter effect was particularly noticeable when confederates applied for stereotypically male jobs. This isn't a form of discrimination that gets as much play as others, but may be much more common than we think. My guess is a lot of people associate pregnancy with impending time off and don't focus as much on the competencies these women bring to the job.
Second, a study on faking. But wait, not faking on personality tests--faking during interviews. Levashina and Campion developed an interview faking behavior scale and then tested it with actual interviews. Guess what? Scores on the scale correlated with getting a second interview. (Looks like those classes you took on answering vaguely are going to pay off!) But wait, there's more. The authors also found that behavioral questions were more resistant to faking than situational questions (another reason to use 'em!), and follow-up questions INCREASED faking (another reason NOT to use 'em!). Other goodies in this article: over 90% of undergraduate job candidates fake during employment interviews (I assume that's just this sample), BUT, the percentage who were actually lying, or close to it, was lower (28-75%).
Third, Brockner et al. provide research results that underline how important procedural fairness (justice) is. Three empirical studies demonstrated that employees judge organizations as being more responsible for negative outcomes when they experienced low procedural fairness. So when applicants or employees get bad news, they'll blame the organization even more if they feel the process used was unfair. Why do we care? Because perceptions of procedural fairness impact all kinds of things, including recruiting (e.g., how someone reacts to not getting a job) and the likelihood of filing a lawsuit (for, say, discrimination).
Fourth, Lievens, Reeve, and Heggestad take a look at the impact of people re-taking cognitive ability tests. Using a sample of 941 medical school candidates who took an admissions exam with a cognitive component, the authors found that retesting introduced both measurement and predictive bias: the retest scores appeared to measure memory rather than g, and predictive validity (for GPA) was eliminated. More evidence that re-testing effects are non-trivial. Pre-publication version here.
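A toy simulation makes the mechanism easy to see. This is not Lievens et al.'s data or model--every parameter below is invented for illustration--but it shows how a retest score that taps memory for specific items, rather than g, loses its predictive punch:

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

n = 941  # matches the study's sample size; everything else is made up
g = [random.gauss(0, 1) for _ in range(n)]       # general cognitive ability
memory = [random.gauss(0, 1) for _ in range(n)]  # item-specific memory
gpa = [gi + random.gauss(0, 1) for gi in g]      # criterion driven by g

first = [gi + random.gauss(0, 0.5) for gi in g]  # first attempt taps g
retest = [0.4 * gi + mi + random.gauss(0, 0.5)   # retest mostly taps memory
          for gi, mi in zip(g, memory)]

print(round(corr(first, gpa), 2))   # substantial validity
print(round(corr(retest, gpa), 2))  # sharply attenuated validity
```

The point isn't the specific numbers--it's that once the score reflects something other than the construct you validated, the validity evidence no longer transfers.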
Last but definitely not least, one of my favorite topics--web-based recruitment. Allen, Mahto, & Otondo present results from 814 students searching real websites. When controlling for a student's image of the employer, job and organizational information correlated with their intention to pursue employment. When controlling for information search, a student's image of the employer was related to the intention to pursue employment, but familiarity with the employer was not. Finally, attitudes about recruitment source influenced attraction and partially mediated the effects of organizational information. What does all this mean? Don't put all your eggs in one basket--organizational image is important, but so is the specific information you have on your website about your organization and the specific job.
There's a lot of other good stuff in this volume, including articles on the financial impact of specific HRM practices, a meta-analysis of telecommuting impacts, engaging older workers, and daily mood.
Wednesday, December 05, 2007
EEOC Issues Fact Sheet on Employment Testing
On Monday, the U.S. Equal Employment Opportunity Commission (EEOC) issued a fact sheet on employment testing.
The fact sheet covers, at a high level, various topics including:
- types of tests
- related EEO laws (Title VII, ADA, ADEA) and burden-shifting scenarios
- the Uniform Guidelines on Employee Selection Procedures
- recent related EEOC litigation and settlements
- employer best practices
For those of you familiar with the legal context of employment testing, this isn't new information. But it is a nice quick summary of some of the major points, and could be very useful for someone not as familiar with this area.
For a more thorough treatment, I recommend the U.S. Department of Labor's guide for employers on testing and selection.
Monday, December 03, 2007
Winter '07 Personnel Psychology
Things are starting to heat up in the journal Personnel Psychology. The shot across the bow of personality testing fired in the last issue turns into a full-blown brawl in this one. But first, let's not forget another article worth our attention...
First up, Berry, Sackett, and Landers revisit the issue of the correlation between interview and cognitive ability scores. Previous meta-analyses have found this value to be somewhere between .30 and .40. Using an updated data set, excluding samples in which interviewers likely had access to ability scores, and more accurately calculating range restriction, the authors calculate a corrected r of .29 based on the entire applicant pool. This correlation is even smaller when interview structure is high, when the interview is behavioral description rather than situational or composite, and when job complexity is high. Why is this important? Because it impacts what other tests you might want to use--the authors point out that using their updated numbers they obtained a multiple correlation of .66 for a high structure interview combined with a cognitive ability test (using Schmidt & Hunter's methods and numbers). Pretty darn impressive.
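To see why a lower interview-ability correlation matters, here's the standard two-predictor multiple correlation formula as a sketch. The .51 validities are Schmidt & Hunter's familiar estimates; the .30 and .21 intercorrelations are my illustrative picks (an older estimate vs. a lower high-structure one), not figures pulled from the Berry et al. article:

```python
def multiple_r(r_y1, r_y2, r_12):
    """Multiple correlation of two predictors with a criterion,
    given each predictor's validity and their intercorrelation."""
    r_sq = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r_sq**0.5

# The lower the intercorrelation between the two predictors, the more
# incremental validity the interview adds over the ability test.
print(round(multiple_r(0.51, 0.51, 0.30), 2))  # → 0.63
print(round(multiple_r(0.51, 0.51, 0.21), 2))  # → 0.66
```

Same two predictors, same validities--just a smaller overlap between them--and the combined R climbs. That's the whole practical payoff of the Berry et al. recalculation.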
Now that we have that under our belt, ready for the main event? As I said, in the last issue Morgeson et al. came out quite strongly against the use of self-report personality tests in selection contexts--primarily because they claim the uncorrected criterion-related validity coefficients are so small. So it's not surprising that this edition contains two articles by personality researcher heavyweights defending their turf...
First, Tett & Christiansen raise several points; more than I have space for here. Some points include: considering conditions under which personality tests are used and validity coefficients aggregated; that there are occupational differences to consider; that coefficients found so far aren't as high as they could be if we used more sophisticated approaches like personality-oriented job analysis; and that coefficients increase when multiple trait measures are used. This sums their points up nicely: "Overall mean validity ignores situational specificity and can seriously underestimate validity possible under theoretically and professionally prescribed conditions."
Second, Ones, Dilchert, Viswesvaran, and Judge come out swinging and make several arguments, including: conclusions should be based on corrected coefficients; coefficients are on par with other frequently used predictors, some of which are much more costly to develop (e.g., assessment centers, biodata); different combinations of Big 5 factors are optimal depending upon the occupation; and compound personality variables should be considered (e.g., integrity). Suggestions include developing more other-ratings instruments and investigating non-linear effects (hallelujah), dark side traits, and interactions. They sum up: "Any selection decision that does not take the key personality characteristics of job applicants into account would be deficient."
Not to be out-pulpited (yes, you can use that phrase), Morgeson et al. come back with a response to the above two articles, reiterating how correct they were the first time around. They state that much of what the authors of the above articles wrote was "tangential, if not irrelevant", that with respect to the ideas for increasing coefficients, "the cumulative data on these 'improvements' is not great", and that corrected Rs presented by Ones et al. aren't impressive when compared to other predictors. They point out some flaws of personality tests (applicants can find them confusing and offensive) but fail to mention that ability tests aren't everyone's favorite test either. They claim that job performance is the primary criterion we should be interested in (which IMHO is a bit short-sighted), and that corrections of coefficients are controversial.
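For readers new to the "corrected vs. uncorrected" fight, here's a minimal sketch of the two standard statistical corrections at issue--disattenuation for criterion unreliability and Thorndike's Case II range-restriction correction. The numbers plugged in are illustrative choices of mine, not figures from any of these articles:

```python
def correct_for_attenuation(r_obs, criterion_reliability):
    """Disattenuate an observed validity for unreliability in the criterion."""
    return r_obs / criterion_reliability**0.5

def correct_for_range_restriction(r_obs, u):
    """Thorndike Case II correction; u = unrestricted SD / restricted SD (u > 1)."""
    return (r_obs * u) / (1 + r_obs**2 * (u**2 - 1))**0.5

# Illustrative numbers only: an observed r of .20, a criterion reliability
# of .52 (a commonly cited estimate for supervisory ratings), and an SD
# ratio of 1.3 between applicants and incumbents.
r = 0.20
r = correct_for_range_restriction(r, 1.3)
r = correct_for_attenuation(r, 0.52)
print(round(r, 2))  # → 0.36
```

An observed .20 nearly doubling after correction is exactly why the two camps talk past each other: one side argues the corrected value estimates the true relationship, the other that the corrections rest on debatable assumptions about reliability and restriction.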
So where are we? Honestly I think these fine folks are talking past each other in some respects. Some issues (e.g., adverse impact) don't even come up, while other issues (e.g., faking) are given way too much attention. It's difficult to compare the arguments side by side because each article is organized differently. It doesn't help that the people on both sides are some of the researchers with the most invested (and most to lose) by arguing their particular side.
I'm thinking what's needed here is an outside perspective. Here's my two cents: this isn't an easy issue. Criteria are never "objective." Job performance is not a singular construct. Job complexity has a huge impact on the appropriate selection device(s). And organizations are, frankly, not using cognitive ability tests nearly as much as they are conducting interviews. So let's stop focusing on which type of test is "better" than the others. Frankly, that's cognitive laziness.
So is this just sound and fury, signifying nothing? No, because people are interested in personality testing. Hiring supervisors are convinced that it takes more than raw ability to do a job. We shouldn't ignore the issue. Instead we should be focusing on providing sound advice for practitioners and treating other researchers with respect and attention.
Should you use personality tests? I'll answer that question with more questions: what does the job analysis say? What does your applicant pool look like? What are your resources like? They're not something you want to apply cookie-cutter, but they're not something you should write off completely either.
Okay, I'm off my soap box. Last but not least, there are some good book reviews in this issue. One is Bob Hogan's book (which I enjoyed immensely and actually finished, which is rare for me), Personality and the Fate of Organizations, which the reviewer recommends; another is Alternative Validation Strategies, which the reviewer highly recommends; and the third is Recruiting, Interviewing, Selecting, & Orienting New Employees by Diane Arthur, which the reviewer...well...sort of recommends--for broad HR practitioners.
That's all folks!
Tuesday, November 27, 2007
Old-school competencies
Recently I went with family to the old Empire Mine in the gold country of California. It's one of the oldest, deepest, and richest mines in California. It produced 5.6 million ounces of gold before it closed in 1956.
In the display cases I happened to notice a brief description of the typical tasks and competencies required of a miner (a mini-job analysis if you will):
How would you have selected people for this job? Physical ability test? How about some type of personality test that measured flexibility and stress resistance? Might a realistic job preview have been a good idea?
Note it includes the traditional "what would happen if this job was done incorrectly..." question. This takes on increased meaning when you look at the shaft that the miners traveled down every day on their "way to work":
You may not be able to tell, but that sucker's pretty much straight down.
Suddenly my cubicle's looking a lot better.
Wednesday, November 21, 2007
Applied Workforce Planning: Air Traffic Controllers
There's plenty of disagreement over whether the aging of the U.S. workforce will indeed spark a wave of simultaneous retirements and thus a scramble to replace employees. Personally, I think it's going to depend on a lot of factors--the housing market, health care costs, and technological advancements to name a few.
But there are instances that demonstrate what it could look like if a significant portion of the U.S. workforce retires. One of those instances is the air traffic controllers who work for the Federal Aviation Administration (FAA) and were the subject of a recent piece on NPR. Let's look at the situation and the implications:
- In the summer of 1981, President Ronald Reagan replaced 12,000 striking air traffic controllers all at once. When you make that many hires, in the same classification, with many of them probably around the same age, in the public sector (where folks often stick around for a long time), you'd better be thinking about workforce planning.
- Thirty percent more controllers retired last year than the FAA predicted. That's significant, and whether it's due to insufficient planning or not, it provides an example of the scope of the problem we're facing.
- Last year the FAA imposed a new labor contract on the controllers which lowered pay for new hires, froze pay for those with longevity, and placed new restrictions on the working environment. Not a great way to attract new candidates into a field where you need a high number of replacements in a short period of time.
- According to the union, employees feel stretched and burned out, which may lead to a serious accident. Not the type of publicity that demonstrates an outstanding employer brand.
Lessons? Like politics, all workforce planning is local. And yes, it's an inexact science. But sometimes the numbers just hit you over the head and demand attention. Do you have a situation like this in your organization? If so, what are you doing about it?
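The cohort arithmetic behind that 1981 hiring wave is simple enough to sketch. Assuming you know (or can estimate) each employee's hire year and age at hire, you can project when retirement eligibility bunches up. A toy illustration (the cohort sizes, ages, and age-56 eligibility rule below are illustrative assumptions, not actual FAA data):

```python
from collections import Counter

def retirement_wave(hires, eligible_age=56):
    """Project the year each employee reaches retirement eligibility
    and count how many become eligible in each year."""
    return Counter(hire_year + (eligible_age - age_at_hire)
                   for hire_year, age_at_hire in hires)

# Hypothetical cohort: a large block hired in 1981-82, ages 25-35 at hire.
hires = [(1981, 25)] * 3000 + [(1981, 30)] * 5000 + [(1982, 35)] * 4000
for year, n in sorted(retirement_wave(hires).items()):
    print(year, n)
```

Even this crude projection makes the point: one big hiring event shows up decades later as a handful of years in which thousands of people become eligible to walk out the door at once.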
Thursday, November 15, 2007
You don't know you
In a recent interview with Gallup Management Journal, Cornell psychologist David Dunning talked about why people aren't very good at judging themselves. Why is this important? Because it has a great deal to do with how we recruit and assess applicants.
A big part of the reason we're so bad at accurately judging ourselves is due to our self-serving bias--our tendency to take credit for successes but blame outside factors (other people, equipment) for our failures. This helps our ego out--if we were always blaming ourselves for failures and attributing success to outside factors, we wouldn't be very happy campers. But it has the downside of oftentimes blinding us to the real reason why things happen.
Dr. Dunning covers a wide range of topics in the interview, including gender differences, when overconfidence may be a good thing, employee training, providing feedback, and the serious implications of this phenomenon (e.g., think about doctors judging their skills as being better than they are).
As I said, this is important because a great deal of recruitment and selection is about self-assessment--a prime example is the growing movement toward online training and experience (T&E) questionnaires made easier with the spread of ATS products. Many of these questionnaires are chock full of questions that (no joke) aren't much different than: "How great are you at X?"
But it's not just about T&Es. People make judgments about themselves when deciding what jobs to apply for in the first place. They describe themselves in certain ways during job interviews (when the motivation to make yourself look good is even stronger).
What can we do about it? Simply put, verify, verify, verify. If someone claims to be the greatest Java programmer on the planet, make them show you. If they claim to be a great orator, make an oral presentation part of the hiring process. Then talk to folks that know their work to establish a history of competence. Don't take someone's word at face value because (a) they may be trying to snow you, but more subtly (b) they may not know themselves.
By the way, Dr. Dunning is co-author of one of my all-time favorite articles, "Flawed self-assessment: Implications for health, education, and the workplace" which describes how the inability of people to judge themselves accurately can result in very serious problems.
Last thing: if you're not already familiar with it, check out the fundamental attribution error, which is one of the other big things our brain is constantly doing. It has huge implications for how we judge others.
Tuesday, November 13, 2007
Rigorous assessment pays off
It's great when the mainstream press gets assessment right. It doesn't happen a lot, so I want to make sure to point out a good example.
Ellen Simon (AP) devoted a recent article to employers that, even in a tight labor market, put job applicants through their paces.
Some of my favorite bits from the article:
- Employers that recognize their employees are an integral part of their brand. If your employees are unhappy, not trained, or otherwise a bad fit, customers (and potential applicants) notice.
- This quote from Rackspace Managed Hosting CEO Lanham Napier: "We'd rather miss a good one than hire a bad one." Without getting into Type I versus Type II errors, let me just say that Mr. Napier demonstrates the wisdom of someone who's seen what a bad hire can (or can't) do. (Check out their refreshingly simple career portal)
- The fact that Rackspace interviews last ALL DAY. Yep, all day. In this age of "I only have 30 minutes for the interview", that's darn refreshing.
- The wonderful use of realistic job preview videos by Lindblad Expeditions that show employees cleaning toilets and washing dishes. Says Kris Thompson, VP of HR, "If they get on board and say, 'This is not what I expected,' then shame on us." Check out how their online preview video combines push with pull.
I don't agree with everything in the article--I'm not a big fan of the idea of secretly judging people on their waiting room behavior--but all in all some great examples here to recognize.
(By the way, the HBR article Simon cites, called "Fool vs. Jerk: Whom Would You Hire?", is here.)
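Napier's "miss a good one rather than hire a bad one" is, implicitly, a claim about asymmetric error costs. A toy expected-cost comparison (every number below is hypothetical, made up purely for illustration) shows why a stricter screen can win even though it rejects more good candidates:

```python
def expected_cost(p_good, hire_rate_good, hire_rate_bad,
                  cost_bad_hire, cost_missed_good):
    """Expected cost per applicant for a given screening policy.
    p_good: base rate of genuinely good applicants.
    hire_rate_good / hire_rate_bad: probability the screen passes each type.
    """
    p_bad = 1 - p_good
    bad_hires = p_bad * hire_rate_bad * cost_bad_hire        # Type I cost
    missed_good = p_good * (1 - hire_rate_good) * cost_missed_good  # Type II cost
    return bad_hires + missed_good

# A lenient 30-minute screen vs. an all-day, multi-stage screen (made-up numbers).
lenient = expected_cost(0.5, hire_rate_good=0.9, hire_rate_bad=0.5,
                        cost_bad_hire=100_000, cost_missed_good=10_000)
strict = expected_cost(0.5, hire_rate_good=0.6, hire_rate_bad=0.1,
                       cost_bad_hire=100_000, cost_missed_good=10_000)
print(lenient, strict)
```

With these assumed numbers the strict screen comes out far cheaper per applicant, because a bad hire costs so much more than a missed good one. Change the cost ratio and the answer can flip, which is exactly why the decision deserves more thought than "we only have 30 minutes."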
Labels:
Articles,
Best practices,
Interviews,
P-O fit,
Retention,
RJP,
Video
Wednesday, November 07, 2007
The changing requirement for war
The competencies required in the modern workplace change seemingly on a daily basis. Whereas computer skills were once a "nice to have" competency, now at least moderate computer proficiency is a "must have" for many (if not most) jobs. Whereas the ability to collaborate frequently with others may have been an infrequent requirement, now it is a prerequisite for many occupations.
Many times these changes are required to take advantage of new technologies and to keep up with competitors, and for the most part the transition works. On the other hand, sometimes we plan for one change in competencies when we should have gone another direction.
An excellent demonstration of this is the military. In this article from The Economist titled Armies of the future, the author describes the vision of former U.S. Defense Secretary Donald Rumsfeld:
The army's idea of its “future warrior” was a kind of cyborg, helmet stuffed with electronic wizardry and a computer display on his visor, all wirelessly linked to sensors, weapons and comrades. New clothing would have in-built heating and cooling. Information on the soldier's physical condition would be beamed to medics, and an artificial “exoskeleton” (a sort of personal brace) would strengthen his limbs.
At first, with initial successes in Afghanistan and Iraq, it seemed this vision would be validated. But as the nature of the warfare changed, so did soldier requirements. The current person in charge of the war in Iraq, General David Petraeus, has co-authored a new manual on counter-insurgency. According to the article:
Counter-insurgency, [the manual] says, is “armed social work”. It requires more brain than brawn, more patience than aggression. The model soldier should be less science-fiction Terminator and more intellectual for “the graduate level of war”, preferably a linguist, with a sense of history and anthropology.
This has huge implications for how the military will recruit, assess, and train soldiers, and we can argue about whether this shift could have been predicted or not, but there are some clear lessons here:
1 - Job analysis is important, and don't just do it once and put it in a file. Not that we needed any more evidence of this, but the example above vividly demonstrates how critical it is to carefully study a job (and keep studying it) to inform recruitment and assessment. Of course simply doing a job analysis doesn't guarantee success, as I'm sure there was a not-insignificant amount of thought that went into the "future warrior" idea.
2 - Look before you leap. Plan for how you will select people and train existing staff. The article doesn't go into this in depth, but one big implication for such a significant shift in competency requirements is what to do with existing staff. We've all been there--we implement a new software program, we require everyone to start writing or presenting more--and sometimes it works, sometimes it doesn't. Is our success due to the skill level of existing staff? Our careful planning? Or dumb luck?
When it comes to ensuring people are successful at what they do, fortunately we don't have to rely on luck. But we do have to devote time, resources, and careful attention to doing recruitment and assessment right. The consequences are important--whether they're saving lives or helping an organization be successful.
Monday, November 05, 2007
Jobfox members look upward
Jobfox is a job site I've posted about before that is making an attempt to more accurately match candidates with employers. The idea is to allow candidates to describe themselves in detail, including their work preferences, then have employers seek them (hence their motto, "be the hunted.")
Speaking of work preferences, one of the features Jobfox offers is the ability for candidates to select up to five aspects of a job that they value the most--things like 401k matching, unstructured environment, and work/life balance.
Since Jobfox has all this information on people, they recently posted an analysis of results of over 6,000 registered job seekers. The press release focuses on the dearth of "green" factors people are looking for (e.g., looking for a company that is ecologically friendly), but to me the take home is about career advancement. Take a look at the top six desired job qualities:
Advancement opportunity (55%)
More leadership responsibility (41%)
Work/life balance (38%)
Leadership that's respected/admired (36%)
Sense of accomplishment (36%)
Higher salary (28%)
Notice that half of these, including the two most popular, are related to moving up in an organization.
The other result of note has to do with another kind of green (in the U.S. at least). Look at where salary is--down at #6. This suggests (and smart organizations know) there are ways of attracting and retaining talented folks simply by offering ways for people to take on increased responsibility and leadership opportunities, or restructuring the job (which might also help with that sense of accomplishment).
Wednesday, October 31, 2007
Links a go-go: Halloween Edition
Good reading for this October 31, 2007--enough to frighten us into paying renewed attention to our recruiting and assessment:
Testing without analysis...now that's scary!
It's scary, all the things that get in the way of job performance
What's that sound? Oh, just ATS reports running.
Be afraid...very afraid...of drop-down menus!
Hope your career portal isn't frighteningly bad
The number of job boards is truly monstrous
Monday, October 29, 2007
October '07 ACN
The October 2007 issue of the Assessment Council News is out with two great articles:
First, Dr. Mike Aamodt tackles the issue of validity coefficients with Beauty May Be in the Eye of the Beholder, but is the Same True of a Validity Coefficient? (begins page 2)
In the article, Dr. Aamodt gathers data from experts in the assessment community on questions such as:
- Is there a minimum value for a validity coefficient that would generally be accepted by testing experts? If so, what is it?
(includes a great table summarizing where certain validity coefficient values have been referenced)
- What is the lowest uncorrected validity coefficient that you believe would indicate that an inference from a test has acceptable criterion validity?
- If a validity coefficient is statistically significant, is that enough to imply job relatedness?
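For readers newer to this debate: an uncorrected validity coefficient is just the Pearson correlation between test scores and a criterion such as supervisor performance ratings. A minimal sketch, using made-up concurrent-validation data purely for illustration:

```python
import math

def validity_coefficient(test_scores, performance):
    """Uncorrected validity coefficient: the Pearson r between
    test scores and a job-performance criterion."""
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(performance) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, performance))
    sx = math.sqrt(sum((x - mx) ** 2 for x in test_scores))
    sy = math.sqrt(sum((y - my) ** 2 for y in performance))
    return cov / (sx * sy)

# Hypothetical data: test scores and supervisor ratings for eight incumbents.
scores = [12, 15, 18, 20, 22, 25, 28, 30]
ratings = [2.1, 2.8, 2.5, 3.2, 3.0, 3.9, 3.6, 4.2]
r = validity_coefficient(scores, ratings)
print(f"r = {r:.2f}")
```

The hard part, of course, is not computing r but interpreting it--which is exactly what Dr. Aamodt's expert survey is getting at.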
The second article is by Natasha Riley and covers a topic near and dear to our hearts--Unproctored Internet Testing--The Technological Edge: Panacea or Pandora's Box? (page 11)
If the title seems a little odd and/or familiar, it's because it's a combination of various presentation titles from the 2007 IPMAAC conference where unproctored Internet testing was a hot topic. In the article, Natasha covers some pros and cons of this type of testing and describes how Riverside County in California is having some success with it.
Remember, internet-based testing does not have to be about selecting out. It can be about giving candidates tools they can use to determine whether they would be a good fit. Cheating is removed as an obstacle when we eliminate the motivation. Consider giving the applicant the "exam" and have them determine whether they want to move forward given their results and how they compare to successful job incumbents.
Anyway, kudos to IPMAAC for another illuminating issue of ACN. Keep 'em comin'--and folks, watch out for the call for proposals for the 2008 IPMAAC conference! (to be announced shortly)
Wednesday, October 24, 2007
October '07 TIP: Alternatives and Title VII
You legal buffs out there know that under Title VII of the Civil Rights Act of 1964 (as amended in 1991) there exists a "burden shifting" framework that lays out how an employment discrimination case (hypothetically) proceeds:
1 - The plaintiff must show that the employer is using a particular employment practice (e.g., a selection test) that results in disparate (or adverse) impact against a legally protected group; if successful,
2 - The employer must show that the practice was/is job related for the position in question and consistent with business necessity; if successful,
3 - The plaintiff must show that there is an alternative employment practice (e.g., a different type of test) that would serve the employer's needs, be equally valid, and have less adverse impact and the employer refuses to adopt it. The classic case is plaintiffs suing over a written knowledge test and suggesting a personality or performance test should have been used.
You may also know that plaintiffs rarely win employment lawsuits (for many reasons, one of which is that employers are getting better at #2 above), and there seems to be a shift toward the third prong of the case--showing that there are alternative testing mechanisms out there that are equally valid and have less adverse impact.
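For step 1 of that framework, the common screening heuristic is the "four-fifths rule" from the federal Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80% of the highest group's rate is generally regarded as evidence of adverse impact. A minimal sketch (the applicant counts are hypothetical, and of course the rule is a rule of thumb, not the whole legal analysis):

```python
def adverse_impact(applicants, hires):
    """For each group, return its impact ratio (selection rate divided by
    the highest group's rate) and whether it falls below four-fifths (80%)."""
    rates = {g: hires[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (rate / top, rate / top < 0.8) for g, rate in rates.items()}

# Hypothetical applicant-flow data for one written test.
applicants = {"group_a": 200, "group_b": 100}
hires = {"group_a": 80, "group_b": 25}
for group, (ratio, flagged) in adverse_impact(applicants, hires).items():
    print(group, f"impact ratio = {ratio:.2f}", "FLAG" if flagged else "ok")
```

In this made-up example group_b's selection rate is well under four-fifths of group_a's, which is the kind of showing that would move the case to step 2.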
The October issue of the Industrial-Organizational Psychologist (TIP) contains two articles (both by individuals who have served as expert witnesses in discrimination cases) that touch on this subject and are worth a read:
Slippery slope of "alternatives" altering the topography of employment testing? by James Sharf
and
Less adverse alternatives: Making progress and avoiding red herrings by James Outtz
Also in this issue, a great analysis of the recent U.S. Supreme Court ruling in Parents v. Seattle School District by Art Gutman and Eric Dunleavy that reviews in detail the current status of affirmative action.
Monday, October 22, 2007
Looking far and wide
When it comes to finding talented individuals, how far and wide do you look? Are you as creative as you could be?
In a recent article James E. Challenger, of the outplacement firm Challenger, Gray & Christmas, described the results of a new study in which half of the 100 HR executives polled stated their companies work informally with former employees; 23% considered stay-at-home parents to be valuable recruiting targets. The goal? Finding folks who are experienced and easy to train.
What does your organization do when people leave? Does it go beyond getting a forwarding address? How about following a structured approach to keep track of talented individuals in case their next job fizzles?
Challenger cites Lehman Brothers as a leader in this area with its Encore program, which according to the website is "designed to facilitate networking and professional development opportunities for individuals interested in resuming their careers in the financial services industry. Ideal candidates are women and men, preferably with industry-related experience at a vice-president level or above, who have voluntarily left the workforce for a year or more."
Does your organization actively recruit people that have been out of the workforce for a year or more? Or are these people seen as "stale"?
The article also includes "resources for returning parents", including:
UCEAdirectory.org (searchable database of continuing education courses)
Meetup.com (real-world social networking)
Modernmom.com (advice on activities and work-life balance)
Showmomthemoney.com (money tips, degree links, and more)
Ladies Who Launch (networking and entrepreneurial advice)
Has your organization considered recruiting efforts that target these types of groups? Or is it hoping that qualified applicants find you?
Some things to think about as we all work on being more creative with reaching out to all qualified candidates. I bet there are a lot of folks out there who would love to see a list of employers willing to hire returning workers (as well as those that are open to part-time arrangements).
Wednesday, October 17, 2007
Increasing response rates to web surveys
I'm guessing I'm not the only one who has been using web surveys more and more (I'm a big fan of SurveyMonkey), and we'll probably all do more in the future. That's why this recent bit of research by Dr. Thomas Archer is so valuable. Titled "Characteristics associated with increasing the response rates of web-based surveys," it's based on results from 99 web-based surveys conducted with Zoomerang over a period of more than two and a half years.
The results were somewhat surprising (to me at least). The length of the questionnaire wasn't particularly important in terms of response rate. This included both the number of open-ended questions and the length of rating scales. Instead the challenge is getting people to the survey in the first place.
How do we do that? The author recommends several strategies:
1 - Leave the questionnaire open for a while (say, three weeks), and send out a couple of reminders along the way.
2 - Pay attention to how you write your survey invitations. They should be written at a low grade level in terms of readability.
3 - Make it clear to survey participants "what's in it for them" (e.g., you'll get a copy of the results).
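Speaking of readability (tip #2), you can run a quick check before hitting "send." Here's a minimal Python sketch of the standard Flesch-Kincaid grade formula; the syllable counter is a crude vowel-group heuristic, so treat the output as a ballpark figure, not gospel:

```python
import re

def flesch_kincaid_grade(text):
    """Rough U.S. grade-level estimate of a survey invitation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Crude heuristic: each run of consecutive vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)
```

If your invitation scores above, say, an 8th-grade level, consider shorter sentences and plainer words.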
If you haven't played around with web-based surveys, I'd encourage you to. They're very easy to learn and typically inexpensive.
I'd be interested to hear whether anyone out there is using web-based surveys as part of their recruitment/assessment process. Seems like a natural fit.
Saturday, October 13, 2007
Depressed workers
A new report by the U.S. Department of Health and Human Services' Substance Abuse and Mental Health Services Administration (SAMHSA; say that five times fast) states that an average of 7% of U.S. workers age 18 to 64 experienced a major depressive episode (MDE) in the past year. The data come from surveys of full-time workers conducted in 2004 and 2006.
Importantly, the rate of MDEs varied by occupation, age, and gender:
- The highest rates of depression were found among personal care and service occupations (10.8%) and food preparation and serving related occupations (10.3%).
- The highest rate among female workers was in personal care and service (a whopping 14.8%). The highest rate among men was in arts, design, entertainment, sports, and media occupations (a much lower 6.7%).
- The lowest overall rates were found in engineering, architecture, and surveying (4.3%); life, physical, and social sciences (4.4%); and installation, maintenance, and repair (4.4%).
- The highest rate among those age 18-25 was in healthcare practitioners and technical (11.9%); lowest was in life, physical, and social sciences (4.3%).
- The highest rates among those age 50-64 were in financial (9.8%) and personal care and service (9.7%) while the lowest rates were in sales and related (3.6%) and production (3.7%).
(Keep in mind when looking at occupational differences that women typically report higher levels of depression, and depression rates decrease with age)
Implications? Keep these statistics in mind if you have people in these occupations in your organization. Depression will impact productivity and your recruitment efforts, not to mention any other processes or initiatives that require high levels of employee energy and involvement. Wellness efforts specifically targeting individuals in these occupations may pay bigger dividends.
The report notes that U.S. companies lose an estimated $30-40 billion a year due to employee depression.
Thursday, October 11, 2007
September '07 issue of J.A.P.
The September 2007 Journal of Applied Psychology is out and it's got some research we need to look at...
First, Hogan, Barrett, and Hogan present the results of a study of over 5,000 job applicants who took a 5-factor personality test on two separate occasions (6 months apart). Only 5% of applicants improved their score on the second administration, and scores were equally likely to change in a negative direction as a positive one. The authors suggest that given these results, faking on personality measures is not a significant issue in real-world settings. Comment: This is faking defined as improving your score to match the job, not faking as misrepresenting yourself consistently over time. Also, I can't help but think of the recent article by Morgeson et al. in Personnel Psych where they argued that faking isn't the issue; it's the low validities we should be concerned about.
Next, De Corte, Lievens, and Sackett present a procedure designed to determine predictor composites (i.e., how much each testing method should be "worth") that optimize the trade-off between validity and adverse impact. The procedure is tested with various scenarios with positive results. You can actually download the executable code here and an in-press version of the article is here (thank you, Drs. De Corte and Lievens!).
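For a feel of what's being optimized: the validity of a weighted composite is the weighted sum of predictor validities divided by the standard deviation of the composite. Here's a minimal sketch; the weights and correlations below are made up for illustration, and the real optimization lives in the authors' downloadable code:

```python
from math import sqrt

def composite_validity(weights, validities, intercorrelations):
    """Validity of a weighted predictor composite: (w'v) / sqrt(w'Rw)."""
    n = len(weights)
    num = sum(weights[i] * validities[i] for i in range(n))
    var = sum(weights[i] * weights[j] * intercorrelations[i][j]
              for i in range(n) for j in range(n))
    return num / sqrt(var)

# Illustrative numbers: a cognitive ability test plus a structured interview.
r = composite_validity([0.7, 0.3], [0.51, 0.40],
                       [[1.0, 0.3], [0.3, 1.0]])  # about .57
```

Note that the composite beats either predictor alone; the De Corte et al. procedure searches for weights that keep validity high while minimizing adverse impact.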
Speaking of validity, the next article of interest is by Newman, Jacobs, and Bartram, and looks at the relative accuracy of three techniques for estimating validity and adverse impact (local validity studies, meta-analysis, and Bayesian analysis). The authors describe which method is optimal in different conditions, using measures of cognitive ability and conscientiousness as predictors. They even toss in recommendations for how to estimate local parameters.
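The Bayesian idea, in its simplest form, is precision weighting: blend the meta-analytic estimate with your local study, trusting each in proportion to how precise it is. This sketch is my simplification, not the authors' exact procedure:

```python
def bayes_validity(meta_mean, meta_var, local_r, local_var):
    """Precision-weighted blend of a meta-analytic prior and a local study.

    A small local sample (large local_var) leaves the estimate near the
    meta-analytic value; a large local sample pulls it toward the local
    correlation.
    """
    w_prior = 1.0 / meta_var
    w_local = 1.0 / local_var
    return (w_prior * meta_mean + w_local * local_r) / (w_prior + w_local)
```

With a meta-analytic validity of .50 and a small local study showing .30, the blended estimate lands much closer to .50, which illustrates why a small local study shouldn't simply override a large meta-analytic database.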
Next, a fascinating and useful study of counterproductive work behaviors (CWBs; things like theft, sabotage, or assault) by Roberts, Harms, Caspi, and Moffitt that tackles the issue from a developmental perspective. Using data from a 23-year longitudinal study of 930 individuals, the authors found that individuals diagnosed with childhood conduct disorder were more likely to commit CWBs as young adults. On the other hand, criminal convictions occurring at a young age were unrelated to CWBs demonstrated later on. Job conditions and personality traits had their own impact on CWBs, above and beyond background factors. Great stuff, especially for those of you with a particular interest in biodata and/or background checks.
Last but not least, a study of person-organization (P-O) fit by Resick, Baltes, and Shantz. Using data from 299 participants in a 12-week internship program, the authors found that the relationship between P-O fit on the one hand and job satisfaction, job choice, and job offer acceptance on the other depends on the type of fit (needs-supplies vs. demands-abilities) as well as the conscientiousness of the individual. Good food for thought when thinking about P-O fit, a consistently popular concept.
Honorable mention: This meta-analysis by Humphrey, Nahrgang, and Morgeson of 259 studies that investigated work design impacts on worker attitudes and behaviors. Think behavior is determined solely by individual ability and disposition? Ya might want to take a gander at this study; it'll change your tune. A great reminder that satisfaction and performance are the result of both the individual and his/her work environment. Also available here.
Monday, October 08, 2007
Checkster and SkillSurvey automate reference checking
When it comes to things that supervisors (and, frankly, HR) don't look forward to, reference checking probably ranks in the top 5. Checking references is time consuming and difficult to do well, because many references refuse to do more than confirm name, job title, salary, and employment dates for fear of getting sued. This is unfortunate, since lawsuits in this area are quite rare and reference checks can be a great source of information.
So it was with no small amount of excitement that I discovered two web-based services that are automating the reference check process--Checkster and SkillSurvey. The basic idea behind these services is that a brief survey consisting of both rating scales and open-ended questions is sent out electronically to references; responses to these surveys (generally at least 3) are combined by the services into an overall report for the employer. While the services are substantially similar, in this post I'll give you a brief overview of each.
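The aggregation step is simple at heart: hold responses until the minimum number of references has answered (which protects confidentiality), then average the ratings into a report. Here's a hypothetical sketch of that logic, not either vendor's actual code:

```python
def reference_report(ratings, minimum=3):
    """Average reference ratings into a report once enough have responded.

    ratings -- {reference_name: {question: score on a 1-5 scale}}
    Returns None until at least `minimum` references have answered.
    """
    if len(ratings) < minimum:
        return None  # keep waiting; no single reference is ever shown alone
    questions = next(iter(ratings.values())).keys()
    return {q: sum(r[q] for r in ratings.values()) / len(ratings)
            for q in questions}
```

The waiting rule matters: references are more candid when they know their individual answers can't be singled out.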
Checkster
Checkster is the brainchild of CEO Yves Lermusi, formerly of Taleo (and frequent contributor to ERE). Lermusi noticed that the frequency and quality of the performance feedback most people receive drops dramatically when they move from school to work, making it difficult for people to understand their strengths and areas for development. To help remedy this, he developed Checkster to be a "personal feedback management tool"--a focus that he says distinguishes it from other services, whose bread and butter is employer-based reference checking. Applicants receive the results of the reference check, just as the employer does, with the idea that this information will be used to help people develop and make better decisions regarding their career.
With Checkster, the employer simply enters the name and e-mail address of each applicant along with the requisition and selects the type of survey to be delivered (Checkster also has a 360-degree survey). That's it. Simple, eh? The applicant takes it from there, logging into Checkster and entering reference names and contact information. References have 7 days to take the quick and confidential survey, and Checkster compiles the resulting information into a report after at least three responses have been collected. From the employer's side, a simple account screen allows you to manage your requisitions and see the status of each. You can see an overview of how it works here, and watch a demo here that includes pictures of a report.
Big bonus: Checkster also has a free employment verification feature which will send an e-mail to previous employers to verify dates of employment, reporting structure, compensation, and eligibility for rehire.
Price: $50 per requisition, which allows you to check references for up to five candidates with a maximum of 15 references per candidate (volume discounts are also available).
SkillSurvey
As I mentioned, SkillSurvey and Checkster work in a similar fashion--the employer enters candidate information, the candidate enters reference information (or the employer can), references evaluate the candidate, and SkillSurvey generates the report.
Differences between Checkster and SkillSurvey that I observed:
1) SkillSurvey allows you to choose different types of surveys depending on the job. Each includes competencies developed by SkillSurvey staff. For example, there are different surveys for sales positions, IT positions, and HR positions (click here for an example for Marketing Manager).
2) Each point on SkillSurvey's rating scale is anchored, which could potentially lead to better reliability.
3) SkillSurvey reports are not automatically available to the candidate (unlike Checkster)--this reflects the emphasis that SkillSurvey places on being primarily a tool for the employer, versus Checkster's focus on individual development.
4) SkillSurvey has a sourcing component built in--you can download a spreadsheet that contains all the information on reference-givers that you can sort and use to identify applicants (very cool).
5) Checkster's reports are a little shorter and more graphical, while SkillSurvey reports are more text-based and extensive.
6) In terms of customization, SkillSurvey offers many options for altering things like turnaround time, and even weighting questions.
7) The actual text that goes out from candidates is easier to modify using Checkster.
A SkillSurvey overview video with screens of surveys and reports is available here, and sample reports are here. They even have a blog written by Doug LaPasta, their founder and chairman.
Price: $59 for one candidate with significant discounts for volume; usually charged in units of 100 candidates. The employer controls the number of reference givers required for completion of the report (anywhere from 2-15).
By the way, SkillSurvey was selected as a Top HR Product for 2007 by Human Resource Executive.
Summary
Both products have the potential to dramatically decrease the amount of time spent checking references, with the added benefits of standardization and signaling to candidates that your organization embraces technology. Both companies have taken steps to ensure reference givers feel comfortable giving out information. The information may also be of higher quality since the process is handled by a third party.
Both services were extremely easy to use. I found representatives from both companies to be knowledgeable and helpful. I'm sure as both products mature we'll see great additions, including hopefully an increased ability to gather off-list checks and even more options for tailoring the surveys.
Some things to keep in mind: (1) Like all mass-mailing type services, make sure e-mails from these companies don't get blocked by firewalls; and (2) Because some candidates may provide false references, do periodic spot checks (e.g., by verifying name & e-mail address).
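On point (2), a first-pass screen is easy to automate before you pick up the phone. My own crude heuristic, not a feature of either product: flag reference addresses from free webmail domains, since those can't be tied to a claimed employer:

```python
FREE_MAIL = {"gmail.com", "yahoo.com", "hotmail.com", "aol.com"}

def flag_for_spot_check(reference_emails):
    """Return addresses worth a manual verification call.

    A webmail address proves nothing by itself, but it can't corroborate
    a claimed employer, so it's a sensible place to start spot checks.
    """
    return [addr for addr in reference_emails
            if addr.rsplit("@", 1)[-1].lower() in FREE_MAIL]
```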
I hope this has piqued your interest; I suggest checking out both products to see if either would make your life easier!
Thursday, October 04, 2007
OPM Introduces Assessment Decision Tool
The U.S. Office of Personnel Management (OPM) continues to innovate in the area of web-based assessment offerings with the introduction of the Assessment Decision Tool.
This interactive application allows you to enter competencies required for a particular position for seven occupational groups, including:
- Clerical/technical
- Human resources
- IT
- Leadership occupations
- Professional/administrative
- Science/engineering
- Trades/labor
After entering this information, the system creates a matrix that identifies which assessment tools would be the best fit for each competency, then gives you totals so you can see which methods might be best overall. These matches were identified by a panel of OPM psychologists who specialize in personnel assessment.
The final step is the generation of a comprehensive report that summarizes the position competencies, assessment matches, and information about each assessment method. There is also a reference section for those wishing to investigate further. The report can be downloaded in HTML format and OPM is working on PDF functionality as well.
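The underlying matrix-and-totals logic is straightforward. Here's a toy sketch; the competency-to-method fit scores below are invented for illustration (OPM's real mappings were set by their panel of psychologists):

```python
# Hypothetical fit scores (0 = poor fit, 2 = strong fit).
matrix = {
    "oral communication": {"structured interview": 2, "written test": 0,
                           "work sample": 1},
    "problem solving":    {"structured interview": 1, "written test": 2,
                           "work sample": 2},
}

def method_totals(matrix):
    """Column totals: which assessment method covers the competencies best."""
    totals = {}
    for fits in matrix.values():
        for method, score in fits.items():
            totals[method] = totals.get(method, 0) + score
    return totals
```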
A great idea that automates competency-exam linkages that many assessment professionals do routinely. Now if they can just have the system create the actual tests...
Monday, October 01, 2007
Links a go-go for October 1, 2007
Good reading for October 1, 2007
The new affirmative action (about schools, but lessons for employers)
2007 ILG National Conference Highlights
Don't automatically dismiss people who have been fired
Court rules EEOC may proceed with discrimination case against L.A. Weight Loss
Visa and using credit scores in the hiring process
Hiring supervisors and leaders (the #1 problem of most organizations, IMHO)
Deloitte demonstrates just how creative recruiting can be
How many names does it take to get to a hire?
Who does The Gap think it is? Monster?
Thursday, September 27, 2007
Autumn 2007 Personnel Psychology
The Autumn 2007 issue of Personnel Psychology is out with plenty for us to sink our teeth into, particularly for you personality testing fans out there. Let's take a look:
First up, Luthans et al. present the results of a study that focuses on positive psychology, which is gaining more and more interest these days. The authors describe support for a survey instrument that purports to measure four aspects of "positive psychological capital"--hope, resilience, optimism, and efficacy--and then looked at whether results predicted job performance and satisfaction. Results? A "significant positive relationship", with the composite of the four aspects outperforming each individually. (Side note: two of the authors published a book last year that focuses on this topic)
Next, Judge & Erez look at how two of the Big 5 personality dimensions--emotional stability and extraversion--predicted job performance at a health and fitness center. Not only did both predict performance on their own, but they did even better in combination. The authors suggest that the combination of emotional stability and extraversion reflects a "happy" or "buoyant" personality that may be more important to predicting performance than each trait in isolation. Great study that goes beyond the "which of the Big 5 are the best" mentality.
First up, Luthans et al. present the results of a study that focuses on positive psychology, which is gaining more and more interest these days. The authors describe support for a survey instrument that purports to measure four aspects of "positive psychological capital"--hope, resilience, optimism, and efficacy--and then examine whether scores predict job performance and satisfaction. Results? A "significant positive relationship", with the composite of the four aspects outperforming each individually. (Side note: two of the authors published a book last year that focuses on this topic)
Next, Judge & Erez look at how two of the Big 5 personality dimensions--emotional stability and extraversion--predicted job performance at a health and fitness center. Not only did both predict performance on their own, but they did even better in combination. The authors suggest that the combination of emotional stability and extraversion reflects a "happy" or "buoyant" personality that may be more important to predicting performance than each trait in isolation. Great study that goes beyond the "which of the Big 5 are the best" mentality.
Next up, Buckley et al. with a study of race and interview panels. Ten White and ten Black raters viewed videotaped responses of 36 White and 36 Black police officers applying for a promotion. Results? Well, there's good news and bad news. The bad news is that there was a same race bias (i.e., White raters rated White applicants better, Black raters rated Black applicants better) and a significant difference between the panels depending on the ethnic makeup. Good news? The effect size was small and "net reconciliation" (the difference between initial and final scores) was significant (but small) only among Black raters.
Pay attention to the next study, recruiters: Zhao et al. present the results of a meta-analysis on the impact of psychological contract breach on 8 work-related outcomes, including attitude and individual effectiveness. An example of contract breach: telling an applicant you have great work-life balance policies and then never approving leave. So what did they find? Breach was related to seven of the eight outcomes; the lone exception was actual turnover. Affect mediated this relationship, which suggests to me that if you have to break a contract, you may be able to somewhat manage the impact by being smart about how you present it and being sensitive about the reaction.
Next up, a bevy of big names in the field (let's just call them "Morgeson et al.") drop a bombshell on personality testing: they argue that because of the low validities associated with self-report personality measures, they should be discontinued for personnel selection! They don't write personality tests off completely, but suggest that alternatives to self-report measures need to be developed (someone may want to tell Judge & Erez; see article above). What might this look like? Conditional reasoning tests are mentioned as a possibility. And (this is just me talkin') "ability" type measures could be developed (e.g., if you're conscientious you should be able to demonstrate certain behaviors) or we could integrate personality measurement into the reference checking process (hey, I didn't say it would be easy). Oh and hey, here's the article if you're interested; thanks to Dr. Morgeson for making so much of his work available.
Ironically (or is it coincidentally? curse you, Alanis Morissette), the very next article is about the development of a new self-report personality measure, the Five Factor Model Questionnaire. Gill & Hodgkinson criticize existing measures (e.g., they contain too many generic items, they use culture-specific language) and find support for their measure using five separate diverse samples, including close convergent and divergent validity with the NEO PI-R.
So that's the end of the research articles, but not the end of this journal issue. It also contains reviews of several books, including:
- Using individual assessments in the workplace: A practical guide for HR professionals, trainers, and managers by Goodstein and Prien (which looks to be a very useful introductory guide, along the lines of Aamodt et al.'s statistics book)
- Foundations of psychological testing: A practical approach (2nd ed.) by McIntire and Miller, which is designed for an undergraduate-level course.
- and for those of you looking for something a little more advanced, a review is also included of Dr. Viswanathan's Measurement error and research design.
Monday, September 24, 2007
Are individuals liable for employment discrimination?
A common question I hear from supervisors and HR professionals is: "Am I personally liable for employment discrimination when I make a hiring decision?"
This recent article deals with a California Supreme Court decision but covers the answer to this question generally.*
* Short answer: it's rare (except for Section 1981 or 1983 claims** and failing to verify employment eligibility***) but you may be named anyway as a tactic on the part of the plaintiff.
** Which can be particularly nasty since there is no cap on damages and no administrative requirement (like filing with the EEOC). On the other hand it is more difficult for plaintiffs to prevail in these cases, and it's only relevant in cases of disparate treatment.
*** Okay, this might be nastier because you could face jail time. Don't forget those I-9s!
Monday, September 17, 2007
September '07 Issue of JOOP
The September, 2007 issue of the Journal of Occupational and Organizational Psychology is out and has several articles worth a look. Here are some of them:
The first article, by Ng et al., presents an overview of the different theories of job mobility. Specifically, they look at the impact of "structural" factors (e.g., the economy), individual differences, and decisional factors (e.g., readiness for change, desirability of the move). Good stuff to keep in mind when thinking about why people get and change jobs.
Next, Kenny and Briner provide an overview of 54 years' worth of British research on ethnicity and behavior. A very broad article that includes discussion of research on recruitment/assessment (draft here).
Third, a fascinating study of the impact of job insecurity on behavior by Probst, et al. Using data gathered from both students and employees, the authors found that perceptions of job insecurity tended to have a negative impact on creativity (I'm thinkin' because your brain's busy thinking about the upcoming unemployment) but a moderately positive impact on productivity ("maybe if I work hard enough they won't fire me"?).
Next up, Hattrup, Mueller, and Aguirre analyzed data from the International Social Survey Programme on work value importance across 25 different nations. The authors found that conclusions about cross-cultural differences in work values will vary depending on how "work values" are operationalized. Why is this important? Because oftentimes sweeping statements are made about how people in certain countries view work-life balance, the importance of job security, interesting work, etc. This research reminds us to pause before adopting those conclusions.
Last but not least, Lapierre and Hackett present findings from a meta-analytic structural equation modeling study of conscientiousness, organizational citizenship behaviors (OCBs), job satisfaction, and leader-member exchange. If this makes you say, "Huh?" then here's the bottom line: (with this data at least) conscientious employees demonstrated more OCBs, which enhanced the supervisor-subordinate relationship, leading to greater job satisfaction. Job satisfaction also seemed to result in more demonstration of OCBs. More evidence to support the value of assessing for conscientiousness, methinks. Also more support for expanding the measure of recruitment/assessment success beyond simply "productivity."
Friday, September 07, 2007
A hiatus and government blogging
I'll be taking a brief hiatus from blogging as I move from the Pacific Northwest to California. There's plenty more blogging to come, it just may be a few weeks as I get settled in.
In the meantime, for those of you interested in learning more about blogs--how to make them and how to use them--you should check out an IBM study that came out recently titled The Blogging Revolution: Government in the Age of Web 2.0 by David Wyld. It's chock full of info, and not just for those of you in the public sector. Topics include:
- How do I blog?
- Touring the blogosphere
- Blogging policy
If this is a topic that interests you, don't forget to check out Scoble & Israel's Naked Conversations: How Blogs are Changing the Way Businesses Talk with Customers.
Oh, and if you look at the bottom of my homepage you might just see a link to an article that a certain someone (okay, me) wrote recently about how to use blogs for recruitment, assessment, and retention.
Thanks for reading & I'll be back soon!
Tuesday, September 04, 2007
The Corporate Leavers Survey
This just in from the Level Playing Field Institute: a new study, sponsored by Korn/Ferry, that finds that corporate unfairness, in the form of "every-day inappropriate behaviors such as stereotyping, public humiliation and promoting based upon personal characteristics" costs U.S. employers $64 billion annually.
This sum, based on survey responses from 1,700 professionals and managers, is an estimate of "the cost of losing and replacing professionals and managers who leave their employers solely due to workplace unfairness. By adding in those for whom unfairness was a major contributor to their decision to leave, the figure is substantially greater."
Examples of the type of behavior they're talking about:
- the Arab telecommunications professional who, upon returning from visiting family in Iraq, is asked by a manager if he participated in any terrorism
- the African-American lawyer who is mistaken THREE TIMES for a different black lawyer by a partner at that firm
- the lesbian professional who is told that the organization offers pet insurance for rats, pigs, and snakes, but does not offer domestic partner benefits
What does this have to do with recruiting? Aside from the obvious (turnover-->need to backfill), check this out:
One of the top four behaviors most likely to prompt someone to quit: being asked to attend extra recruiting or community related events because of one's race, gender, religion or sexual orientation.
Not only that, but 27% of respondents who experienced unfairness at work in the last year said this experience "strongly discouraged them" from recommending their employer to other potential applicants.
What can employers do to prevent this? Aside from the tried and true methods (good and regular training for all supervisors, prompt and thorough investigations), the report offers other suggestions, which vary depending on the group (e.g., more/better benefits for gay and lesbian respondents, better managers for people of color).
Definitely some things to ponder.
Summary here
Friday, August 31, 2007
More games
I've posted before (here and here) about how Google and other companies are literally using board games as part of their applicant screening process, and how I'm not a big fan of this technique.
The September, 2007 issue of Business 2.0 has an article titled "Job Interview Brainteasers" that highlights another type of game employers play--this time, it's asking "creative" questions during the interview.
Let's take a look at some interview questions from the article and who's asked them:
How much does a 747 weigh? (Microsoft)
Why are manhole covers round and not, say, square? (Microsoft)
How many gas stations are there in the United States? (Amazon.com)
How much would you charge for washing all the windows in Seattle? (Amazon.com)
You have 5 pirates, ranked from 5 to 1 in descending order. The top pirate has the right to propose how 100 gold coins should be divided among them. But the others get to vote on his plan, and if fewer than half agree with him, he gets killed. How should he allocate the gold in order to maximize his share but live to enjoy it? (eBay, and, similarly, Pirate Master)
You are shrunk to the height of a nickel and your mass is proportionally reduced so as to maintain your original density. You are then thrown into an empty glass blender. The blades will start moving in 60 seconds. What do you do? (Google)
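(For the curious: the pirate question actually has a clean answer via backward induction. Here's a quick Python sketch--my own illustration, not anything from the article--that assumes the proposer votes for his own plan, that he survives with at least half the votes, and that a pirate accepts a bribe only if it strictly beats what he'd get after a mutiny.)

```python
def pirate_split(n_pirates, coins=100):
    """Backward-induction sketch of the 5-pirate puzzle.

    Assumptions (mine, not the article's): the proposer votes for
    his own plan, and he is killed only if *fewer* than half agree,
    so he needs ceil(k/2) votes in a group of k pirates.
    """
    # alloc[i] = coins for pirate i, with index 0 = lowest-ranked
    alloc = [coins]  # a lone pirate simply keeps everything
    for k in range(2, n_pirates + 1):
        votes_needed = (k + 1) // 2   # at least half, rounded up
        prev = alloc                  # payoffs if the proposer dies
        new = [0] * k
        # Bribe the cheapest voters: pirates worst off in the
        # (k-1)-pirate scenario accept one coin more than that.
        cheapest = sorted(range(k - 1), key=lambda i: prev[i])
        remaining = coins
        for i in cheapest[:votes_needed - 1]:  # own vote is free
            new[i] = prev[i] + 1
            remaining -= new[i]
        new[k - 1] = remaining        # proposer keeps the rest
        alloc = new
    return alloc[::-1]                # top-ranked pirate first

print(pirate_split(5))  # the classic answer: [98, 0, 1, 0, 1]
```

The counterintuitive part--and presumably what the interviewers are after--is that the top pirate keeps 98 coins and buys just two votes for one coin each.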
These questions have been around for quite a while and are used to measure things like creativity and estimation ability. The question is: Are they any better than board games? Probably. But they're still a bad idea.
Why do I say that? Well, first of all, a lot of people find these questions plain silly. And this says something about your organization. Sure, some people think they're fun or different. But many more will scratch their head and wonder what you're thinking. And then they'll wonder if they really want to work with you. Particularly folks with a lot of experience who aren't into playing games--they want to have a serious conversation.
Second, there are simply better ways of assessing people. If you want to know how creative someone is, ask them a question that actually mirrors the job they're applying for.
Want to know how they would tackle a programming question? Ask them. In fact, you can combine assessment with recruitment, as Spock recently did.
Want them to estimate something? Think about what they'll actually be estimating on the job and ask them that question. And so on...
Another advantage of these types of questions? The answers give you information you can actually use. (Hey, you've got them in front of you--why not use their brains?)
If you don't really care about the assessment side of things, and in reality are just using these questions as a way to communicate "we're cool and different" (as I suspect many of these companies are doing) there are better ways of doing this. Like communicating in interesting and personal ways (e.g., having the CEO/Director call the person). Like talking about exciting projects on the horizon. Like asking candidates what THEY think of the recruitment and assessment process (gasp!).
My advice? Treat candidates with respect and try your darnedest to make the entire recruitment and assessment process easy, informative, and as painless as possible. Now THAT'S cool and different.
Wednesday, August 29, 2007
Georgia-Pacific fined by OFCCP for using literacy test
In a display of "See? It's not just the EEOC you need to worry about", the U.S. Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) has fined the Georgia-Pacific Corp. nearly $750,000.
Why? During a "routine audit of the company's hiring practices", the OFCCP discovered that one of Georgia-Pacific's paper mills was giving job applicants a literacy test that resulted in adverse impact against African-American applicants (saw that one coming a mile away). The $749,076 will be distributed to the 399 applicants who applied for a job while the mill was using the test.
The test required applicants to read "bus schedules, product labels, and other 'real-life' stimuli." The OFCCP determined that the test was not backed by sufficient evidence of validation for the particular jobs it was being used for.
The company defended itself by saying it promotes heavily from within and wanted workers to be able to move around easily.
A sensible policy, but completely irrelevant in terms of defending the legality of a test. In fact it works against an employer, since (as one of the attorneys points out) you're in effect testing people for higher-level positions, which is a no-no.
Several attorneys are quoted in the article, and they mention the importance of the Uniform Guidelines, which really only apply when a test has adverse impact, as in this case. It does make me wonder what sort of validation evidence G-P collected (if any)...
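(For readers who haven't run the numbers before: the usual rule of thumb for flagging adverse impact under the Uniform Guidelines is the "four-fifths rule"--a protected group's selection rate below 80% of the highest group's rate is evidence of adverse impact. A minimal sketch, using made-up numbers rather than anything from the G-P case:)

```python
def adverse_impact_ratio(hired_a, applied_a, hired_b, applied_b):
    """Ratio of group A's selection rate to group B's.

    Under the Uniform Guidelines' four-fifths rule of thumb,
    a ratio below 0.80 is generally regarded as evidence of
    adverse impact (it's a screening heuristic, not a verdict).
    """
    rate_a = hired_a / applied_a  # protected group's pass rate
    rate_b = hired_b / applied_b  # comparison group's pass rate
    return rate_a / rate_b

# Hypothetical figures, NOT Georgia-Pacific's actual data:
# 30 of 100 Black applicants pass; 60 of 100 White applicants pass.
ratio = adverse_impact_ratio(30, 100, 60, 100)
print(round(ratio, 2))  # 0.5 -- well under the 0.80 threshold
```

Once a test trips that threshold, the burden shifts to the employer to show the test is valid for the job in question--which is exactly where G-P appears to have come up short.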
Note: the article states incorrectly that "all federal contractors" are subject to OFCCP rules. Actually only certain ones are, and details can be found here.
Hat tip.
Tuesday, August 28, 2007
A funny employment lawyer
Of course they exist. If you don't know one, you do now.
Mark Toth is the Chief Legal Officer at Manpower and he's just started a blog on employment law that so far is highly amusing.
For example, he sings a song about employment law.
A song.
About employment law.
I mean, you gotta be into this stuff to go that far.
He's also got a REALLY BAD hiring interview up for you to watch, along with his top 10 "employment law greatest hits."
My personal favorite? #6: "Communicate, communicate, communicate (unless you communicate stupidly)"
One of the more creative blogs I've seen. Here's to hoping it lasts.
And no, I won't be singing a song about assessment. Unless you really want me to (and trust me, you don't want me to).
Hat tip.
Monday, August 27, 2007
National Work Readiness Credential
Have you heard about the National Work Readiness Credential?
It's a 3-hour pass-or-fail assessment delivered over the web that is designed to measure competencies critical for entry-level workers, and consists of four modules:
1. Oral language (oral language comprehension and speech)
2. Situational judgment (cooperating, solving problems, etc.)
3. Reading
4. Math
I love the idea of a transferable skills test; kinda like the SAT of the work world. I think this approach, combined with assess-and-select-yourself notions are two of the truly creative directions we're going in.
Downsides?
(1) Right now it's not available in all areas of the country.
(2) A searchable database (either as a recruiting tool or for verification) would be great.
(3) Last but not least, employers have to be cautious that the position they're hiring for truly requires the competencies measured by this exam.
But all that aside, a promising idea. It will be interesting to see where this goes.
Here are links to some of the many resources available:
Brochure
Training guide
Candidate handbook
Assessment sites
Appropriate uses
FAQs
Friday, August 24, 2007
Links a go-go for 8-24-07
Good reading for August 24, 2007:
OFCCP issues final regulations implementing Jobs for Veterans Act of 2002 (job banks for postings listed here), and...
OFCCP also posts interim guidance on use of race and ethnic categories (direct link here)
Are you sure you know where your hires are coming from? (hint: beware drop-down boxes)
Jobmatchbox does the 50 top recruiting blogs
Interview questions you can ask--and those you can't (includes simplistic video!)
Tracking adverse impact
Favorite defense motions in limine for employment cases
Does harassment training lead to more lawsuits? (hat tip)
The housing market and its relationship to recruiting
New regulations on no-match letters: Ho-hum?
Thursday, August 23, 2007
Big Disability Discrimination Decision for California Employers
On August 23, 2007, the California Supreme Court published an important decision in the case of Green v. State of California. The decision should be reviewed by any employer covered by California's Fair Employment and Housing Act (FEHA), which like the Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities.
What'd they say? Rather than muddy the waters, I'll quote directly from the case:
"The FEHA prohibits discrimination against any person with a disability but, like the ADA, provides that the law allows the employer to discharge an employee with a physical disability when that employee is unable to perform the essential duties of the job even with reasonable accommodation. (§ 12940, subd. (a)(1); 42 U.S.C. § 12112(a).) After reviewing the statute's language, legislative intent, and well-settled law, we conclude the FEHA requires employees to prove that they are qualified individuals under the statute just as the federal ADA requires." (pp. 1-2)
"...we conclude that the Legislature has placed the burden on a plaintiff to show that he or she is a qualified individual under the FEHA (i.e., that he or she can perform the essential functions of the job with or without reasonable accommodation)." (p. 5)
What does this mean? It means employers covered by FEHA can breathe a little easier, and employees bringing suit under FEHA for a disability claim may have a slightly more uphill battle. The court has now made clear that in these cases it is the plaintiff/employee who has the burden of showing they are "qualified" under FEHA, not the defendant/employer. And if the plaintiff can't satisfy this "prong" of their case, they won't win.
...unless this case is appealed to the U.S. Supreme Court...
Stop playing games
First, Google and PricewaterhouseCoopers have prospective candidates playing with Lego blocks.
Now, another company has candidates playing Monopoly (see minute 1:50) to judge multi-tasking ability.
C'mon people. You don't need to play games. Spend just a little time putting together a good assessment. Just follow these simple steps:
1. Study the job. Figure out which key KSAs/competencies are needed on day one. And spend more than 5 minutes doing it.
2. Think about what JOB TASK you could re-create in a simulation that would measure the required competencies.
3. Spend some time putting together the exercise and how you will rate it. Spend some more time on it. Practice it. Then spend some more time preparing.
4. Give it. Rate it. Treat candidates with respect throughout the process.
5. Gather performance data once people are on the job and see if it predicts job performance.
6. Hire a professional to fix your mistakes. No, I'm kidding. If you've done the other steps right, you should be golden.
Stop playing games and stop making candidates play them. If you want to know how well an Office Manager candidate multi-tasks, put them in a scenario that matches what they would really face on the job. Phones ringing, Inbox filling up, managers at your door. Not playing with phony money.
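For the quantitatively inclined, step 5 above boils down to a criterion-validity check: correlate assessment scores with later performance ratings. Here's a minimal sketch in Python; the scores and ratings are invented for illustration only, not real data.

```python
# Sketch of step 5: does the assessment predict job performance?
# All numbers below are hypothetical, purely for illustration.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical assessment scores and later supervisor ratings:
scores  = [72, 85, 64, 90, 78, 69, 88, 75]
ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.4]

r = pearson_r(scores, ratings)
print(f"validity coefficient r = {r:.2f}")
```

In practice you'd want a much larger sample and a statistician's eye on range restriction and rating quality, but even a rough correlation tells you whether your exercise is measuring anything that matters on the job.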