Celebrating 10 years of the science and practice of matching employer needs with individual talent.
Wednesday, December 31, 2008
Most popular posts of 2008
As we say goodbye to 2008, I thought it time to review which topics people seem to be the most interested in--on this blog at least.
So without further ado, here, in descending order, were my top 10 most popular posts of 2008:
1. Adverse impact on personality and work sample tests
2. Can the Dewey Color System be used for hiring?
3. What the new ADA law means for recruitment and assessment
4. What does your selection process say about your culture?
5. A review of situational judgment tests
6. B = f (P, E)
7. One man's journey into Talent Management software
8. Real world value of strategic HRM practices
9. Grading topgrading
10. Staffing.org's 2008 benchmark report
So what do I take from all this? Reader interest varies, but judging by these posts and the sidebar poll on the homepage, personality testing is far and away the most popular topic. Hiring managers continue to be very interested in selecting candidates for more than just their ability, and HR professionals continue their quest for the best way to do this. There are very good personality tests out there, but I get the feeling no one's discovered the holy grail here yet.
What I'd like to see in 2009 is much more emphasis on candidate self-selection. Realistic job previews (descriptions and multi-media), assessments used for feedback rather than selection, and much much more. It's a very exciting direction, so let's get to it!
So Happy New Year, keep up the good work, and thanks for reading!
Oh, and please complete Dr. Charles Handler's survey. Thanks!
Monday, December 22, 2008
Holiday HR Humor
One of my favorite games at work is HRspeak. Using a few words and phrases that seem to be overly popular in some HR circles, we create sentences that sound good but are practically meaningless.
For example:
"Hey, watcha up to?"
"Not much, just transforming our operational cluster into customer-centric business practices that strategically meet our mission-critical goals using knowledge-based ROI-driven solutions."
This is funny (at least to a very small segment of the population) because an even smaller segment of the population actually finds language like this practically useful. In my experience it's typically HR consultants trying to convince HR to reinvent itself. The concept may be sound, but the words get in the way.
As usual, Scott Adams picks up on this phenomenon (and makes it much funnier than I can) in a recent Dilbert strip.
This time of year, often when people reflect and plan, let's all try to communicate a little more clearly.
And here's hoping that you and yours enjoy a joy-filled, family and friend-centric season focused around cuisine-based activities that meet your critical soul-based needs.
Wednesday, December 17, 2008
The future of selection
Last week I presented at the Personnel Testing Council of Northern California's monthly luncheon. The title was "Selection in a changing world: What will we be doing, and who will be doing it?"
The topic was motivated mostly by my own experiences related to what's going on in our field lately. With many of us facing severe budget challenges, I got to thinking about things like:
- What do we need to be doing, what should we be doing, and how can we add value in lean times?
- What impact does automation (e.g., applicant tracking systems) have on the work we do?
- What impact does automation have on the competencies we need?
The field of HR has been talking about transformation for years now, with IMHO only partial success. It's time for assessment professionals to take a look at ourselves and determine if we're where we need to be.
You can see the slides here.
They should also be posted soon on PTC-NC's website.
Tuesday, December 09, 2008
Resume <> Personality
I don't know about you, but one of my least favorite forms of assessment is poring through resumes. They're not standardized, they leave out important details, and they often provide way too many details about things we don't care about. But most importantly, it just doesn't feel like a very valid way of making inferences about candidates.
There are good reasons to dislike this activity. Not only are there rampant self-inflation problems, but the inferences recruiters tend to make about applicant personality are erroneous, according to a recent study. After looking at responses from 244 recruiters, the authors found several important results:
1) Low interrater reliability -- in other words, the recruiters didn't agree with each other very often about what the resume said about an applicant's personality.
2) When recruiters' inferences of personality were correlated with applicants' actual Big 5 scores, low levels of validity were found (slightly better for conscientiousness and openness to experience).
3) Despite the two findings above, rater perceptions of extraversion, openness to experience, and conscientiousness predicted their assessment of the applicants' employability.
Lesson? Be very careful what you infer from a resume. Think carefully about the facts you're using to infer personality. If you must use resumes, screen out only those who lack the basic qualifications to do the job, then follow up the resume screen with a number of much more valid assessments--work sample tests, structured interviews, in-depth reference checks, etc.
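For the mechanically minded, here's a minimal sketch of what finding 1 (interrater reliability) looks like in code. The data, sample sizes, and names are invented for illustration, not taken from the study:

```python
# A minimal sketch (made-up data) of the interrater reliability question in
# finding 1: how much do recruiters agree about what the same resumes say
# about applicant personality?
import numpy as np

rng = np.random.default_rng(42)

# Six recruiters each infer a conscientiousness rating (1-7) for 40 resumes.
# The ratings here are random, so expected pairwise agreement is near zero.
ratings = rng.integers(1, 8, size=(6, 40)).astype(float)

n_raters = ratings.shape[0]
pairwise = [np.corrcoef(ratings[i], ratings[j])[0, 1]
            for i in range(n_raters) for j in range(i + 1, n_raters)]
print(f"mean interrater r = {np.mean(pairwise):.2f}")  # hovers around zero
```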
Wednesday, December 03, 2008
New evidence of the power of GMA
One of the biggest areas of focus for personnel psychologists is uncovering which selection mechanisms do the best job of predicting job performance.
Different researchers have focused on various tests, but perhaps no tests have received as much attention as those that measure general mental ability (GMA). GMA has consistently been shown to produce the highest criterion-related validity (CRV) values and has some very strong proponents. (For those of you not up on your statistics, CRV refers to the statistical relationship between test scores and subsequent job or training performance; with a maximum value of 1.0, the bigger, the better)
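In a local validation study, a CRV estimate is simply the correlation between test scores and a later criterion such as supervisor ratings. A minimal sketch, with made-up numbers:

```python
# A CRV is the Pearson correlation between predictor scores and a later
# criterion (e.g., supervisor performance ratings). Numbers are made up.
import numpy as np

test_scores = np.array([62, 71, 80, 55, 90, 68, 75, 83])
performance = np.array([3.1, 3.4, 4.2, 2.8, 4.5, 3.0, 3.9, 4.0])

crv = np.corrcoef(test_scores, performance)[0, 1]
print(f"observed (uncorrected) CRV = {crv:.2f}")
```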
One of the staunchest advocates of ability testing is Frank Schmidt, who has studied and written extensively on the topic. You may have heard of the widely cited article he co-authored with John Hunter in 1998. In that article, they present a CRV value of .51 for cognitive ability tests, which is considered excellent. Only work samples received a higher value, though that estimate has since been questioned.
In the latest issue of Personnel Psychology (v61, #4), Schmidt and his colleagues present updated CRV values, and they're even higher. Using what they claim is a more accurate way of correcting for range restriction, the authors report an overall value of .734 for job performance and .760 for training performance. These are the highest values I've seen reported in a major study and further solidify GMA as "the construct to beat" when predicting performance.
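For the statistically curious, the classic correction for direct range restriction (Thorndike's Case II) is sketched below. Note this is the textbook formula, not necessarily the more refined correction for indirect range restriction that Schmidt and colleagues advocate, and the numbers are hypothetical:

```python
import math

def correct_direct_range_restriction(r_obs: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r_obs -- validity observed in the restricted (e.g., hired) sample
    u     -- ratio of the applicant-pool SD of test scores to the SD in
             the restricted sample (u > 1 when selection restricts range)
    """
    return (r_obs * u) / math.sqrt(1.0 - r_obs**2 + (r_obs**2) * (u**2))

# Hypothetical numbers: an observed r of .33 with u = 1.5 corrects upward
# to about .46, illustrating how corrected estimates end up much larger
# than the raw correlations computed on hired employees alone.
print(round(correct_direct_range_restriction(0.33, 1.5), 2))  # 0.46
```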
The article also applies the same updated statistical approach to the CRV of two personality variables that have generally been supported--Conscientiousness (Con) and Emotional Stability (ES). Unfortunately, the values presented were not much larger than previously reported: .332 (.367) for Con and -.100 (-.106) for ES, for job (training) performance.
That all being said, there are some things to note:
1) Use of GMA tests for selection is likely to produce substantial adverse impact with most applicant samples of any substantial size, potentially limiting their usage in many cases. (A quick way to screen for adverse impact is sketched just after this list.)
2) CRV coefficients are just one "type" of validity evidence. The calculation is far from perfect and depends greatly on the criterion being used. The authors admit that they were unable to measure the prediction of contextual performance, which could have resulted in substantially higher values for the personality variables.
3) On a related note, some of the largest CRV values for personality tests I've seen were reported in Hogan & Holland (2003), where they aligned predictor and criterion constructs. This study was excluded from the current study because "the performance criteria they employed were specific dimensions of job performance rather than overall job performance."
4) The lower values reported in this study for personality measures may also reflect the way personality is measured, which the authors acknowledge. They suggest that using outside raters, as well as multiple scales for the same constructs, may yield higher CRV values. Interestingly, they also suggest that personality may matter less because individuals with sufficient GMA can compensate for their weaknesses--an introvert forcing themselves to frequently speak with others, for example.
5) CRV values for GMA continued to vary substantially depending on the complexity of the job, yielding values that ranged .20-.30 apart from one another. This is a key point and is related to the fact that the type of job--and job performance--matters when generating these numbers.
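As promised, here's a minimal sketch of that quick adverse impact screen, the familiar four-fifths (80%) rule. All counts are hypothetical:

```python
# Four-fifths (80%) rule: compare the focal group's selection rate to the
# reference group's. A ratio under 0.80 flags potential adverse impact.
# All counts below are hypothetical.
def impact_ratio(passed_focal, total_focal, passed_ref, total_ref):
    return (passed_focal / total_focal) / (passed_ref / total_ref)

ratio = impact_ratio(passed_focal=20, total_focal=60,   # 33% selection rate
                     passed_ref=40, total_ref=70)       # 57% selection rate
print(f"impact ratio = {ratio:.2f}")  # 0.58 < 0.80 -> warrants a closer look
```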
Last but not least, there's another great article in this issue, devoted to (coincidentally) conducting CRV studies by Van Iddekinge and Ployhart--check it out. They go into detail about many issues directly relevant to the study above.
Tuesday, November 25, 2008
Giving thanks for research
It's almost Thanksgiving here in the U.S., a time to give thanks, and I'd like to thank a largely unsung group of people. Thank you to all the researchers out there who try to help us put some science around the art we call personnel recruitment and selection. Thank you for all your work and insights.
What better way to celebrate this wish of thanks than by talking about a new issue of the International Journal of Selection and Assessment (v16, #4)! As usual it's chock-full of good articles, so let's take a look at some of them.
First, a study of applicant perceptions of credit checks, something many of us do for sensitive positions. Using samples of undergraduates, Kuhn and Nielsen found mostly negative reactions, especially for older participants, but they varied with the explanation given as well as privacy expectations. Worth a look for any of you that conduct large numbers of background checks (and if you do, don't miss the Oppler et al. study below).
Next up, a fascinating study of police officer selection in the Netherlands. Using data from over 3,000 applicants, De Meijer et al. found evidence for differential validity between ethnic majority and minority participants. Specifically, cognitive ability tests predicted training performance for minorities but not for those in the majority. Performance prediction for the latter group was low for cognitive ability tests and somewhat better using non-cognitive ability variables. By the way, the dissertation of the primary author, a fascinating look at similar issues, can be found here.
The third article is one of those articles that almost (...almost) makes me want to pay for it, and anybody interested in electronic applicant issues should take note. In this study, Dunleavy et al. used simulations to show the tremendous impact that small numbers of applicants can have on adverse impact (AI) analysis. In fact, the authors reveal situations where AI can be caused or masked by a single applicant applying multiple times! The authors present ways of identifying and handling these cases. Scary stuff. Hope the OFCCP is reading.
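To see how fragile these analyses get with small numbers, consider a toy four-fifths-rule example (invented counts, not the authors' data):

```python
# With small applicant pools, a single applicant can flip a four-fifths-rule
# conclusion, echoing the Dunleavy et al. point. Counts are invented.
focal_before = 4 / 6        # 4 of 6 focal-group applicants pass
focal_after = 4 / 7         # ...then one more focal-group applicant fails
reference = 8 / 10          # 8 of 10 reference-group applicants pass

print(round(focal_before / reference, 2))  # 0.83 -> passes the 80% test
print(round(focal_after / reference, 2))   # 0.71 -> now flags adverse impact
```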
Fourth, Lievens and Peeters present results of a study of elaboration and its impact on faking situational judgment tests. Using master's students, the researchers found that requiring elaboration on items (i.e., the reason they chose the response) had several positive results. It reduced faking on items with high familiarity. It also reduced the percentage of "fakers" at the top of the distribution. Lastly, candidates reported that the elaboration allowed them to better demonstrate their KSAs. This could be a great strategy for those of you worried about the inflation effects of administering SJTs online.
Next, Furnham et al. with a study of assessment center ratings. The authors found that expert ratings of "personal assertiveness", "toughness and determination", and "curiosity" were significantly correlated with participant personality scores, particularly Extraversion. Correlations with intelligence test scores were low.
Last but definitely not least, Oppler et al. discuss results of a rare empirical study of financial history and its relationship to counterproductive work behaviors (CWBs). Using a "random sample of 2519 employees" the authors found that those with financial history "concerns" were significantly more likely to demonstrate CWBs after hire. Great support for conducting these types of checks.
There are other articles in here, so I encourage you to check them all out. Thank goodness for research!
Thursday, November 20, 2008
Does vendor size matter?
Yesterday I attended a demonstration by a smallish firm whose product automates the job application, exam administration, and applicant tracking process. We talked about a lot of different things, including how easy it would be for end users to understand.
After the vendor left, I had a discussion with another of the attendees about the meeting. But our conversation wasn't so much about the product as it was about the company. We talked about the small size of the firm and how much of an issue that is when selecting the vendor.
On the one hand, it would seem small firms are more susceptible to succession planning issues. They're typically run by a charismatic, passionate, and extremely talented individual whose energy continually sustains the business. What happens when they're gone? Small firms may also lack the built-in redundancies that larger firms have, as well as the capacity to handle larger projects.
On the other hand, in my experience it's the quality of the product and the support that matters most for an IT implementation, not sheer size. Does the vendor "get" the customer? Do they have experience with the relevant issues? Are they honest about the product's capabilities and time frames? These are the factors I've found to be most important.
What do YOU think? Is vendor size relevant? Is it a make or break issue? I've temporarily turned off the registration requirement for the blog, so anyone can comment. I'm interested in hearing from users as well as vendors.
Monday, November 17, 2008
New blog--and newsletter
Over at U.S. Research Associates, my friend and colleague Dr. Jim Higgins has started a blog, and a newsletter, devoted to topics such as personnel selection and employment discrimination. They're both worth checking out.
Jim starts his blog with a post about legally defensible competency modeling for selection. He offers some great words of advice to those of you thinking about (or already) using competencies for selection. For example: what validation strategy is appropriate? (hint: it's not content validation)
The first issue of the newsletter (November 2008) starts with the same article but includes several others you might be interested in. Topics include:
- Why employers lose legal challenges to their testing process
- Adverse impact analysis using binary logistic regression
- A review of several statistical analysis software packages
- An overview of criterion-related validation
- Multiple regression and the OFCCP
Good stuff! Jim also offers several free on-line training classes on topics such as criterion validation, adverse impact, and Microsoft Excel.
Wednesday, November 12, 2008
SkillSurvey adds passive candidate search
About a year ago I posted about two new websites that automate the reference checking process--Checkster and SkillSurvey. Turns out it was one of the most popular posts I've ever done, so there must be an interest out there in making this headache-inducing process easier. I'm still a big fan of both services, and had the pleasure of meeting Yves Lermusi, CEO of Checkster, at this year's IPMAAC conference.
Anyway, I got an email the other day from SkillSurvey about some cool new functionality they've implemented. Back when I wrote the original post, it occurred to me that one eventual outcome of having all this reference check data is the database that's being generated. Imagine the sourcing possibilities if each reference gives contact information on 5 other individuals in their field!
Well, SkillSurvey apparently had the same thought and this new function (with the somewhat cumbersome name of Passive Candidate Compiler) allows users to search their own database. A lot of recruiters already maintain a database of reference-givers, but this automates the whole process, saving tons of time.
Pretty slick if you ask me. Next step? How about access to the entire database?
Friday, November 07, 2008
The red clothing effect
A little fun for our Friday...
I'm not a big fan of interviews. Particularly ones that are unstructured (e.g., different questions for different candidates, no rating scales, etc.).
Why? Aside from the fact that research has shown them to be much less predictive of job success, they're problematic because most people think they are above average interviewers (they're obviously not), that they're particularly good at picking up things like deception and lies (once again...they're not), and that interviews are done easily and quickly (they shouldn't be).
Another reason interviews are tricky is they're susceptible to all kinds of perceptual errors. Some of the more common ones include:
- The "Halo" effect: something about the candidate biases the way you see other things about them. My favorite example is having positive feelings toward someone because they went to your alma mater.
- The contrast effect: your opinion of a candidate is biased because they followed a particularly good or bad candidate.
- The fatigue effect: the way you evaluate candidates changes over the course of a day or week because you get tired of interviewing.
This is just a sample of the cognitive biases that enter into the interview process. Other typically non-job-related factors also come into play, such as someone's height.
Now we may have to add the red clothing effect. A recent study of undergraduates at the University of Rochester found that the color red, compared to other colors, led heterosexual men to find women more attractive (it had no effect on female participants' perception of other females).
The researchers validated this (albeit with small samples) using a variety of experiments, including digitally altering the shirt color in the same image. They even looked at other factors such as willingness to ask someone out on a date (hint for straight women: choose red over blue).
The silver lining for us is that color had no impact on perceptions of likability, kindness, or intelligence. Still, it's something to be aware of that could potentially have important consequences. After all, remember what happened to Neo.
Monday, November 03, 2008
Webinar on using blogs for recruitment
We all know blogs can be a great way to share information. But what about using them for recruiting purposes?
I wrote about this a while back--how blogs can be used as information "push" and "pull" mechanisms or even as a retention tool--but if you're interested in learning more, check out this webinar on Thursday November 6th put on by Bernard Hodes and the American Hospital Association.
Speaking of webinars, in this time of budgetary challenge, make sure to check out all the free webinars offered by places like HR.com and HCI. Could be a good time to focus on internal competencies. Just be wary of giving out your email address too freely--unless you like lots of email.
Wednesday, October 29, 2008
Upcoming webinar on defending your tests
Tests aren't valid or invalid per se--it depends on what you use them for.
But if your tests are challenged legally (say, because they have a discriminatory impact against a protected group), one of the things you'll want to defend yourself with is a test validation report--a documentation of why the test was developed, how it was developed, and the purposes for which the test should be used.
This is one of the topics that will be covered in an upcoming webinar sponsored by Talent Management and presented by some well known folks over at APT. Taking place on November 11th at 11am PST, the webinar is titled "Testing the Test: What You Need to Know about Test Validation, Litigation, and Risk Management."
Should be worth a watch/listen.
On a related note, the latest issue of Talent Management had some good articles in it, including ones on "role based" assessment (which just sounds like good ol' fashioned position-based assessment) and employee surveys. It's actually not a bad little magazine, and it's free. You can subscribe here.
Wednesday, October 22, 2008
Leadership podcast
Seems to be the season to discuss leadership. Wonder why?
A recent guest on the Doug "lawyer to peacemaker" Noll show was Dr. Robert Hogan, renowned expert on leadership and personality and President of Hogan Assessment Systems. Dr. Hogan discussed several things, including:
- how evolution relates to leadership and followership
- why the current trend of focusing on "strengths" may be misguided
- the base rate of bad leaders (hint: it's not 5%)
- what followers want in a leader (hint: it's not a big ego)
Good stuff. Reminds me that I need to join 2002 and start doing more audio/video.
Thursday, October 16, 2008
Discrimination, assessment centers, and handshakes
The title sounds like a strange combination, no? That's because it refers to three separate pieces of research published in the September 2008 Journal of Applied Psychology.
First, Umphress et al. describe a study that demonstrates how important leaders can be in setting the tone for selection. Specifically, the authors found that when authority figures focused team selection decisions on job performance factors, individuals that had a tendency to discriminate based on social dominance orientation were less likely to do so. Implication? To help avoid discriminatory hiring and promotion decisions, focus decision makers on job-related performance factors. That way, they're less likely to rely on their own biases.
Next, Meriac et al. with a meta-analysis of the incremental validity of assessment center (AC) ratings over other assessment tools. Specifically, the authors found that AC ratings explained a "sizable proportion of variance in job performance" beyond cognitive ability and personality tests. Good news for fans of assessment centers out there.
Last but not least, Stewart et al. describe the results of a study of 98 undergraduate students who participated in mock interviews. In the words of the authors, "quality of handshake was related to hiring recommendations." How exactly does that work? Apparently how you shake hands sends messages about your degree of extraversion (above and beyond your appearance). The authors also found that the effect seemed to be stronger for women than men. Implication? For those of you interviewing for sales jobs, pay attention to your handshake!
Honorable mentions:
- Judge & Livingston on how traditional gender role orientation impacts the wage gap
- Cascio & Aguinis on trends in I/O psychology from 1963 to 2007 (good stuff; read here)
- Chiaburu & Harrison's meta-analysis on how co-workers impact job performance
- Levi & Fried on differences between African Americans and Whites on attitudes toward affirmative action programs
Friday, October 10, 2008
FAA uses games to hire and train
Turns out all I have to do is post about how we should be using video games for recruitment and assessment, and an example appears!
In this recent article in the New York Times (should be first link), the author describes how the Federal Aviation Administration (FAA) is using a sophisticated simulator to train air traffic controllers.
Motivated primarily by the impending retirement wave and massive need for new controllers (1,700 a year for the next 10 years), the FAA has developed a multi-screen simulator that allows trainees to hone their skills in a safe but semi-realistic environment. From the article:
"The tower simulation is realistic. Aircraft first appear as tiny dots against blue sky, clouds or stars. On the ground, drivers of maintenance trucks ask permission to cross a runway so they can fix a lighted sign. A click of the instructor’s mouse can shift the time of day, and change the weather — from rain to hail or cloudy to clear. To make the simulations as unpredictable as in the real world, some pilots ignore instructions."
But it's not only the training that's innovative. The FAA's screening process puts most of ours to shame. Specifically, candidates complete a six-hour computerized aptitude test (which might be overkill) that measures geometry and math ability.
This is followed by "game-like tests" designed to measure things like ability to work under pressure, maintain "situational awareness", short- and long-term memory, multitasking, and flexibility. The tests vary from air traffic simulations to ones that look like Frogger or Tetris.
So someone out there gets it! The system is even described as "a big Xbox."
But they could do even better. Here are some ideas how:
1. Use the multi-screen simulator for recruitment and selection, not just training. I really hope they show the system off during recruitment open houses--I know I would. And if it isn't cost prohibitive, and makes sense given entry level requirements, why not use the simulator as part of the screening process?
2. They've got a pretty good recruitment site and make good use of video. Why not add a Java-powered mini-game that simulates the job? Maybe have a leaderboard and let people enter their email and opt in to receive more information about becoming an air traffic controller.
3. On a related note, why not go the route of America's Army and mix in a little SimCity and Flight Simulator to produce a more full-featured game that simulates the job? Again, players could have the option of uploading their scores to a public website and entering their email (securely) to get more info.
Not all of our jobs lend themselves so well to simulations and video (I'm not sure SimHRManager would be very popular). But whenever possible, let's take advantage of the technology around us!
Tuesday, October 07, 2008
Googling applicants, RJPs, and engagement
The September 2008 Issues of Merit was just released by the U.S. Merit Systems Protection Board and has some articles worth a quick read:
- The drawbacks inherent in doing internet searches of potential applicants (e.g., finding inaccurate or misleading information, gathering information that could be potentially discriminatory)
- Realistic job previews (of which I'm a huge fan)--these can take the form of videos, classes (on-line or otherwise), or simply a better description of the job. MSPB gives the example that they provide applicants with a list of "This job might be for you if..." factors alongside "This job might not be for you if..."
I've found "willingness" or "pre-screening" questionnaires to be helpful as well, which simply have candidates answer a series of questions related to the screening process (e.g, "Are you willing to have your credit reports checked?") or the job itself (e.g., "Are you willing to come in contact with toxic chemicals on a daily basis?")
- Performance management and employee engagement. MSPB studies have shown that the PM process itself is more important than the formal structure--so what you say to your employees is more critical to engaging them than whether they receive a report each year. Makes intuitive sense, but many organizations assume that because everyone receives an annual appraisal, their performance management system must be working!
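And here's that sketch of an automated willingness screen. The items and logic are my own invention, purely to show the idea, not anything MSPB prescribes:

```python
# A bare-bones automated willingness/pre-screen: candidates who answer "no"
# to any must-accept condition self-select out before formal assessment.
# Items and logic are invented for illustration.
WILLINGNESS_ITEMS = [
    "Are you willing to have your credit reports checked?",
    "Are you willing to come in contact with toxic chemicals on a daily basis?",
    "Are you willing to work rotating weekend shifts?",
]

def passes_willingness_screen(answers):
    """True only if the candidate answered yes to every item."""
    return len(answers) == len(WILLINGNESS_ITEMS) and all(answers)

print(passes_willingness_screen([True, True, True]))   # True  -> proceed
print(passes_willingness_screen([True, False, True]))  # False -> self-selects out
```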
Wednesday, October 01, 2008
What the new ADA law means for recruitment and assessment
On September 25, 2008, President Bush signed the Americans with Disabilities Act Amendments Act (ADAAA) or Senate Bill 3406 (actual text of which can be found here).
The changes the ADAAA entails have been well covered, especially over at George's Employment Blawg. The purpose of this post isn't to give a detailed analysis of the changes, but rather highlight what it means for recruiting and hiring.
First, a brief reminder of who is covered by the ADA. It covers not only those who have a disability that substantially limits a major life activity, but also those who have a record of such a disability, and those who are REGARDED as having a disability (more about this in a second). Check out this information from the EEOC for a brief overview.
Given that introduction, there are two big things we all need to be aware of when it comes to the new bill:
1) Who is considered "disabled" under the ADA has just expanded dramatically. The ADAAA explicitly rejects several U.S. Supreme Court decisions from the last 10 years that narrowed who was considered disabled, and restores the law's broad scope. One big way it does this is by including those whose disability can be mitigated (e.g., controlled diabetes).
2) Individuals who claim injury because they were "regarded" as having a disability now have a much easier time qualifying under that category.
So what does this mean? It doesn't fundamentally change anything we do when recruiting or assessing candidates. But it does mean this: recruiters and hiring managers have to be even more careful about making assumptions when sizing up candidates. Because more people are now covered by the ADA, our potential risk has expanded.
And because of the expansion of who is covered, particularly in the "regarded" category, recruiters and hiring managers should receive explicit training not to assume someone is disabled and therefore can't do the job. Their inquiry (if they must make one) should be limited to this question: "Can you perform the essential functions of the job, with or without reasonable accommodation?"
That's the only question they should ask on this topic, and they should ask it of all candidates. Of course they should also be prepared to answer this question: "Well I'm not sure. What are the essential functions?" (another great use for job analysis)
So bottom line: big change to the law, relatively small change to the way we do business. But a good reminder of our responsibilities.
p.s. other good reads regarding ADAAA include this article and this essay
Friday, September 26, 2008
Employment testing: Imperfect but still invaluable
Imagine the following scenario. A detailed study is performed on a job. Observations of incumbents are made. Discussions are held with subject matter experts. Survey data are collected. The results all indicate that this job requires a high level of intelligence, extraversion and conscientiousness, customer service skill, and a fairly advanced ability to use computers. These attributes are, by far, the most important in predicting success on the job.
Taking all this information, we construct a rigorous assessment process. Candidates spend an entire day being observed as they take well-constructed ability and personality tests, participate in several scenarios that require them to demonstrate how they handle customer service situations, and are asked to produce several products on the computer using a variety of software packages. Raters are experts in the field, and the job, and use objective, behaviorally-anchored rating scales.
The results are then combined for an overall score, based on a formula generated from the initial study of the job. Those with the highest score are hired.
Given all the test scores, what is the maximum percentage of subsequent job performance that we can expect to predict?
a) 10%
b) 25%
c) 50%
d) 75%
Those of you in the field of personnel assessment can see this coming. Those of you that aren't, which option did you select? If you chose (b), move to the head of the class.
Why only 25%? Because tests aren't perfect, and because so much more than individual competencies goes into predicting job performance. (The percentage of performance variance explained is the square of the validity coefficient, so even an excellent validity of .50 yields just 25%.) In fact, 10% is a much more commonly observed figure.
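Here's that arithmetic, using validity figures that have come up on this blog (a quick sketch):

```python
# Variance explained is the square of the validity coefficient r, so even
# strong predictors account for a modest share of job performance.
for r in (0.30, 0.51, 0.734):
    print(f"r = {r:.2f} -> variance explained = {r**2:.0%}")
# r = 0.30 -> 9%   (near the commonly observed 10%)
# r = 0.51 -> 26%  (Schmidt & Hunter's classic GMA estimate)
# r = 0.73 -> 54%  (the corrected estimate discussed in an earlier post)
```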
25% doesn't sound like a lot. Does this mean we shouldn't use tests when hiring? Absolutely not. Because without well-developed assessments, you can expect to predict nothing--zero percent, nada.
In an article in the September 2008 issue of Industrial and Organizational Psychology, Scott Highhouse addresses one result of this low percentage: many hiring managers and HR professionals persist in using their own intuition and judgment--via things like informal interviews--to supplement (or replace) tests, because of a stubborn belief that this will dramatically increase the likelihood of hiring the right candidate. The problem? They're wrong.
The available research makes clear that unstructured, overly subjective techniques such as resume review and informal interviews are simply not very predictive of job performance. So why do people keep using them? Highhouse suggests a key problem is our addiction to our own judgment. Commentators on the article point out other factors, such as lack of feedback, evolution, and different criteria of interest (e.g., hiring managers often care more about getting someone quickly rather than getting the "best" candidate).
In my experience the primary reason decision makers insist on using less valid assessment methods comes down to ego, in two ways. First, people simply have a hard time admitting that tests do a better job of predicting things than they do. Second, we have an innate need to be involved in decisions impacting our lives. Who amongst us is willing to hire someone, unseen and unheard, based solely on test results? Even the most die-hard assessment fanatics among us have a difficult time with it, even though we know it would probably be for the best!
All in all, a highly recommended and provocative article that gets to one of the biggest challenges in personnel assessment: how assessment professionals and hiring managers can work together to find the right person for the job.
Wednesday, September 24, 2008
The best recruitment and retention tool: That touchy-feely stuff
Everyone who's anyone in HR circles these days knows that to have a "seat at the table," be taken seriously, and avoid criticism, we need to ditch our reputation as touchy-feely "people" people and focus on ROI, adding value, walking the walk, metrics, and all those other magical things that will somehow convince executives that we're worth our salaries. Right?
Here's the funny thing. Probably the biggest single factor in attracting and keeping the right people is your reputation--what people say about your organization. Referrals are time and time again cited as the most valuable source of high quality candidates (although we desperately need more non-survey research into this issue). And whose word carries the most weight? Current employees. So where should our focus be? Yep, you guessed it, our current employees.
So what does that mean? All that touchy-feely stuff like job satisfaction surveys and wellness programs helps ensure that your employees are happy (or as happy as you can make them). Happy employees are not just more likely to leave your customers with a positive impression, they're more likely to sing your praises to potential applicants. This positive reputation, combined with salaries that are at least in the ballpark and clear communication about expectations and rewards, will go a long way toward your future success in attracting talent.
Thursday, September 18, 2008
Using video games to recruit and select candidates
A new study by the Pew Internet and American Life Project found that:
"virtually all American teens play computer, console, or cell phone games and...the gaming experience is rich and varied, with a significant amount of social interaction and potential for civic engagement."
and
"Game playing is universal, with almost all teens playing games and at least half playing games on a given day"
This raises a question:
Is there a benefit, or even a mandate, to make recruitment and assessment more like a video game?
We've already seen a massive amount of interest in using virtual worlds like Second Life for recruiting (which has met with mixed success). And the U.S. Army is always on the cutting edge with things like America's Army (which has enjoyed quite a bit of success).
When it comes to assessment, we've seen some valiant efforts, such as the virtual job tryout. And video-based testing has been around for a long time.
But with everything that's out there, would you describe your candidate experience as "rich and varied" with a "significant amount of social interaction"?
Laying aside for the moment the fact that many organizations lack even realistic job preview videos, what competitive advantage is to be gained by the employer that figures out how to make its recruitment and selection process interactive? What if instead of the process being a one-way street (candidates search for information about employers, employers try to figure candidates out), it was a two-way simultaneous sharing of information?
Doom came out 15 years ago. The Sims, 8 years ago. Isn't it time we developed realistic 3-dimensional worlds that allow candidates to make real-time branching decisions and learn about a potential employer, while we measure things like attention to detail and judgment?
Is it just me or are we missing an enormous opportunity to attract a new generation of workers and gather valuable competency information at the same time?
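To make the idea concrete, here's a toy sketch of what a branching assessment scenario might look like under the hood. Everything in it--the prompts, the choices, the competencies, the scoring weights--is invented for illustration; a real simulation would derive all of that from a job analysis and validation research.

```python
# A minimal branching-scenario engine (all content invented for illustration).
# Each node presents a situation; each choice routes to the next node and
# credits one or more competencies, so the candidate learns about the job
# while we quietly accumulate measurement data.

SCENARIO = {
    "start": {
        "prompt": "A customer reports an error in their order. What do you do first?",
        "choices": [
            ("Re-read the order details before responding", "verify",
             {"attention_to_detail": 2}),
            ("Apologize and immediately issue a refund", "refund",
             {"judgment": 0}),
        ],
    },
    "verify": {
        "prompt": "The order shows the customer chose the item themselves. Now what?",
        "choices": [
            ("Explain the record and offer an exchange", None, {"judgment": 2}),
            ("Insist the customer is wrong", None, {"judgment": 0}),
        ],
    },
    "refund": {
        "prompt": "The refund posts, but the root cause is still unknown. Next step?",
        "choices": [
            ("Review the order log for a pattern", None, {"attention_to_detail": 2}),
            ("Close the ticket", None, {"attention_to_detail": 0}),
        ],
    },
}

def run(scenario, answers):
    """Walk the scenario using pre-supplied answer indexes; return scores."""
    scores, node = {}, "start"
    for pick in answers:
        label, nxt, credit = scenario[node]["choices"][pick]
        for competency, points in credit.items():
            scores[competency] = scores.get(competency, 0) + points
        if nxt is None:
            break
        node = nxt
    return scores

print(run(SCENARIO, [0, 0]))  # {'attention_to_detail': 2, 'judgment': 2}
```

The point is simply that every choice can do double duty: it routes the candidate deeper into a realistic preview of the job while generating competency data along the way.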
"virtually all American teens play computer, console, or cell phone games and...the gaming experience is rich and varied, with a significant amount of social interaction and potential for civic engagement."
and
"Game playing is universal, with almost all teens playing games and at least half playing games on a given day"
This raises a question:
Is there a benefit, or even a mandate, to make recruitment and assessment more like a video game?
We've already seen a massive amount of interest in using virtual worlds like Second Life for recruiting (which has met with mixed success). And the U.S. Army is always on the cutting edge with things like America's Army (which has enjoyed quite a bit of success).
When it comes to assessment, we've seen some valiant efforts, such as the virtual job tryout. And video-based testing has been around for a long time.
But with everything that's out there, would you describe your candidate experience as "rich and varied" with a "significant amount of social interaction"?
Laying aside for the moment the fact that many organizations lack even realistic job preview videos, what competitive advantage is to be gained by the employer that figures out how to make its recruitment and selection process interactive? What if instead of the process being a one-way street (candidates search for information about employers, employers try to figure candidates out), it was a two-way simultaneous sharing of information?
Doom came out 15 years ago. The Sims, 8 years ago. Isn't it time we developed realistic 3-dimensional worlds that allow candidates to make real-time branching decisions and learn about a potential employer, while we measure things like attention to detail and judgment?
Is it just me or are we missing an enormous opportunity to attract a new generation of workers and gather valuable competency information at the same time?
Sunday, September 14, 2008
What to look for in a leader
With the upcoming U.S. Presidential election, my co-workers and I have been spending a lot of time talking politics. We're of different political parties, so things can get pretty interesting.
Debates aside, we're all assessment professionals, so we keep coming to the same conclusion: it's difficult to figure out who would be the better president without an accurate job analysis. Without knowing what it takes to be good at a job, it's almost impossible to predict who would best fill it.
This topic isn't a new one for me; I wrote previously about whether "experience matters" in a leader, and how you would actually go about hiring one. But in a way that's putting the cart before the horse. Before we look at actual ways of finding a leader, let's look at what it takes to be one.
Of course many, many people have written about leadership and the KSAs or competencies it takes to be good at it, including respected voices in the assessment world. But new data is always welcome, which is why I was pleased to see a recent analysis by Doris Kearns Goodwin, author of Team of Rivals: The Political Genius of Abraham Lincoln, and one of the most thoughtful authors on the subject today.
In her recent article she outlines 10 attributes that she believes distinguish "truly great Presidents." Using Abraham Lincoln and Franklin Roosevelt as exemplars (identifying and agreeing on high performers is a particular challenge in this case), she listed the following. Think about how these relate to leadership in your organization:
1. The courage to stay strong--to survive in the face of adversity and motivate oneself even when frustrated.
2. Self-confidence--the ability to surround oneself with experts, regardless of whether they agree with you or not.
3. An ability to learn from errors--to acknowledge and grow from mistakes rather than continue down a road of failure.
4. A willingness to change--not just by getting elected (or hired) but by going against previous tendencies and preferences when situations require it.
5. Emotional intelligence--the ability to encourage others, take blame, and help others play to their strengths.
6. Self-control--either not letting events upset you, or pausing before responding when they do.
7. A popular touch--an awareness of where the citizens (employees) are and what they are (and aren't) ready for.
8. A moral compass--doing what's right, not just what's politically expedient or popular.
9. A capacity to relax--this includes the ability to defuse your own emotions as well as relax those around you through tools like humor.
10. A gift for inspiring others--the ability to communicate broad goals shaped by history and context, shift opinions, and motivate.
So given these qualities, here are some things to think about:
- Are these attributes generalizable to leaders in your organization?
- Are these the types of things you're looking for when recruiting/hiring leaders?
- If so, are you assessing them in the right way?
- If you're a leader, do you embody these qualities?
Thursday, September 11, 2008
HR goes to jail
It's extremely unusual for human resources employees to face personal liability for violating laws relating to recruitment and selection (constitutional claims being one of the rare exceptions). It's even more rare for there to be criminal penalties associated with hiring; no one's going to jail for not showing a .30 criterion-related validity coefficient or for surfing Facebook for people under 40. But when it comes to immigration violations and child labor, the law doesn't mess around.
Case in point: on Tuesday, September 9th, two HR employees at Agriprocessors, a meatpacking facility in Postville, Iowa, were named in court documents and face both misdemeanor and felony charges for violating child labor and immigration laws.
Agriprocessors has been in the news for months ever since a May immigration raid found nearly 400 workers were in the country illegally. But up until now none of the management or HR employees had been in trouble.
The HR employees face charges for helping to hire children under 18 into dangerous jobs and helping applicants obtain identification using false documents. The misdemeanor charges associated with state child labor law violations carry a penalty of 30 days in jail and a fine up to $625.
The felony charges associated with immigration law violations, on the other hand, carry a prison term of 2-22 years and a $750,000 fine for one of the HR employees, and up to five years and $250,000 for the other if they are convicted. One of the more egregious charges was helping employees complete new paperwork using new names and false documents one day before the raid.
Sort of puts those large discrimination settlements into context, doesn't it?
Monday, September 08, 2008
One man's journey into Talent Management software
Several months ago I began looking into "talent management" software. To be honest, I wasn't sure exactly what I was looking for--or at--but I learned quite a bit along the way. In this post I'll give you a glimpse into my learning experience.
I began my journey looking simply for something that would allow my organization to inventory our existing talent--for example, by allowing employees to describe their own competencies and work preferences. My interest was piqued by this article in Workforce Management, which does a very good job of describing some of the major players and their products (they had me at "baseball card-style interface").
The first thing I learned was that there isn't a clear definition of what talent management (or "TM") is in the first place. For argument's sake, let's define it as documentation, analysis, planning, and decision making regarding how competencies are brought into and used within an organization.
Now I know many of you out there are thinking, "you're JUST NOW looking at TM products?" Well, yes, and I'll blame the fact that I work in the public sector and we tend to be a bit tardy to the game. But I actually think that's a good thing in this case, because of where the product development is right now.
Turns out the "talent inventory" function is only a small part of TM product offerings and is usually included in the career management portion. But I've come to believe this is one of the less important features of this type of software. What may be much more important is the performance management component.
Now, I know this is a blog on recruitment and assessment, so you may raise an eyebrow when I mention performance management, but bear with me. If we think broadly about talent management (i.e., it includes everything from branding to employee exit), how we place individuals and assess their performance is not only key to organizational success, but is the ultimate indicator of how successful our recruitment and assessment methods are.
So what's out there? Quite a few offerings, actually (it's a rapidly growing field). I looked at several products with my primary criterion being usability--if supervisors aren't going to use it, it's worthless. And, it turns out many of them have similar functionality--even similar cost--so it really came down to picking something that looked attractive and useful.
So, after watching many demos, and talking to many sales reps, here, in roughly descending order of usability, are the companies whose products I looked at:
Sonar6
SuccessFactors
CornerstoneOnDemand
Authoria
Taleo
Kenexa
This list is somewhat deceiving because all of these products tend to look the same and have similar abilities--except one. For me, only one of these products stands out in terms of design, and that's Sonar6. Why? White space, white space, white space. Simple but attractive graphics and GUI. No maze of 400 branching text menus. I encourage you to check out the demo videos on all of their websites, but I think you will agree with me that Sonar6's interface is simpler, more graphical, and more engaging.
Am I going to be purchasing one of these products? Hard to say at this point. But I will certainly keep you posted. So if you're interested in these products, take your time, ask a lot of questions, and think about how this will fit with your culture. And by the way, if you're as new to this field as I was, I strongly recommend the resources over at Bersin and Associates.
Thursday, September 04, 2008
Free webinar on adverse impact
My colleague Dr. Jim Higgins shares my passion for providing education on topics relating to recruitment, testing, and selection.
So I'm pleased to draw your attention to an upcoming free webinar he's offering on Understanding Adverse Impact in Testing, Selection, Promotion, and Staff Reductions.
From the registration page:
"This free webinar will help you better understand the history, present and future of adverse impact analysis and will aid in your efforts to ensure that your organization takes the steps necessary to protect itself from claims of discrimination. It will also help you ensure that your organization’s hiring and promotional practices are maximally compliant with the letter and spirit of EEO laws and regulations."
The webinar takes place on September 16th at 11am and it's only 45 minutes. If you like what you see/hear, Jim offers a free (yes, free) 9-session course on basic applied statistics.
Tuesday, September 02, 2008
Power v. Group Differences
In a recent post I wrote about a chart my co-workers and I created to help us communicate with hiring supervisors about the pros and cons of various testing instruments. That graph mapped power (validity) on one axis, and speed of administration on the other.
One of the comments on that post mentioned it would be nice to see power vs. group differences. I agreed. So here it is!
The bottom line on this graph (no pun intended): if you're looking for the best combination of both, look to the upper left quadrant.
A few notes of caution before interpreting the graph:
- this graph charts only Black-White differences, which is the largest data set we have. It's important to remember that combinations of other groups (including gender) will yield slightly different results.
- the evidence on group differences for T&Es is rather scant. Little difference has been found so far, but that could change in the future, depending on what specific training or experience is being measured.
- finally, as the excellent recent article by Roth, et al. reminds us, adverse impact in your selection process depends on several factors, including the specific test or construct, the selection ratio, your applicant pool, and the order you place your assessments in.
Thursday, August 28, 2008
Real world value of strategic HRM practices
In my last post I talked about two articles from the most recent issue of Personnel Psychology (August, 2008) that had to do with adverse impact.
In today's post I'd like to talk about three of the other articles in that issue that all have to do with strategic human resource management (SHRM; not that SHRM) practices and their bottom-line impact. These studies don't directly reflect recruitment or selection practices but will interest anyone with a broader interest in HR.
The first study (by Birdi, et al.) compared several common SHRM practices (empowerment, training, and teamwork) with several more operational-style initiatives (TQM, JIT, AMT, and SCP) and looked at their effect on company productivity. The authors had access to a great database that included data from 308 companies over 22 years (!).
So what did they find? Out of the SHRM practices and the operational-style initiatives, only two--both SHRM practices--had significant effects on productivity. Specifically, the effects of employee empowerment and extensive training represented a gain of approximately 7% and 6%, respectively, in terms of value added per employee. Interestingly, it took both empowerment and training a couple years to impact productivity.
The second study by Nishii et al. looked at the attributions employees make about the reasons why management adopts certain HR practices and how these impact attitudes, organizational citizenship behaviors (OCBs) and customer satisfaction. Data from over 5,000 employees of a supermarket chain were analyzed.
So what did they find? A significant and positive correlation between employee attitudes and an attribution to the employer that the adoption of HR practices was based on a concern for customer service quality or employee well-being. In turn, employee attitudes were significantly (although less so) positively correlated with OCBs. Finally, the OCB of helping behavior was significantly correlated with customer satisfaction. In other words, when employees felt HR practices were implemented with an eye toward improving service quality or their own well-being, this improved their attitudes, which in turn increased the likelihood they would demonstrate helping behavior toward coworkers, which increased customer satisfaction. On the other hand, when employees attributed HR practices to keeping costs down, getting the most work out of employees, or complying with union requirements, there was no impact on employee attitudes.
The third study looked at how changes in team leadership impacted customer satisfaction in branches of a regional bank. Walker et al. examined data from 68 branch managers over a four-year period. The authors performed two tests--one of the mean differences between time periods and a residual analysis.
Why does that matter? Because the first type of test (called a t-test) simply looked at whether managers improved their team leadership scores and whether customer satisfaction ratings, on average, went up during that period. The answer to these questions was no. But the residual analysis looked at whether specific managers who improved (or worsened) their team leadership scores saw parallel improvement (or declines) in customer satisfaction ratings. The answer to THAT question was yes--in two of the three time periods (r=.21 and .31, respectively).
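To make the distinction concrete, here's a sketch using fabricated data (not the study's) that contrasts the two questions: "did scores rise on average?" versus "did the managers who improved the most also see the biggest satisfaction gains?" Regressing time-2 scores on time-1 scores and correlating the leftovers is one common way to run the second kind of analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 68  # branch managers, mirroring the study's sample size

# Fabricated scores: leadership at two time points, satisfaction likewise,
# built so that *changes* in the two measures track each other even though
# average levels barely move.
lead_t1 = rng.normal(3.5, 0.5, n)
lead_t2 = lead_t1 + rng.normal(0.0, 0.3, n)  # no average improvement
sat_t1 = rng.normal(4.0, 0.4, n)
sat_t2 = sat_t1 + 0.5 * (lead_t2 - lead_t1) + rng.normal(0.0, 0.2, n)

# Question 1 (paired t-test): did mean leadership scores go up? Likely "no" here.
t, p = stats.ttest_rel(lead_t2, lead_t1)
print(f"mean change test: t={t:.2f}, p={p:.3f}")

# Question 2 (residual analysis): regress time-2 on time-1 and ask whether
# the residuals (improvement beyond expectation) move together.
def residuals(t2, t1):
    slope, intercept, *_ = stats.linregress(t1, t2)
    return t2 - (intercept + slope * t1)

r, p = stats.pearsonr(residuals(lead_t2, lead_t1), residuals(sat_t2, sat_t1))
print(f"residual correlation: r={r:.2f}, p={p:.3f}")
```

With data like this, the first test comes back flat while the second picks up the relationship--the same pattern the study reports.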
So what does all this mean? These studies certainly suggest that strategic HRM initiatives such as empowerment, communication, and extensive training (for both leaders and subordinates) can have significant, practical impacts on outcomes important to the organization.
Monday, August 25, 2008
Adverse impact on personality and work sample tests
The latest issue (Autumn, 2008) of Personnel Psychology has so much good stuff in it that I'm going to split it into two parts.
The first part, which I'll do today, focuses on the more selection-oriented articles which have to do with adverse impact on personality tests and in work sample exercises. In my next post I'll talk about three more articles that have to do with strategic HRM practices.
Today let's talk about adverse impact. It's a persistent dilemma, particularly given many employers' desire to promote diversity and the legal consequences of failing to avoid it. One of the "holy grails" of employee assessment is finding a tool that is generally valid, inexpensive to implement, and does not result in large amounts of adverse impact.
One type of instrument that has been suggested as fitting these criteria is the personality test. They're easy to administer and can be valid predictors of performance, but our knowledge of group differences has up until now been limited. In this issue of Personnel Psych, Foldes, Duehr, and Ones present meta-analytic evidence that attempts to fill in the blanks.
Their study of Big 5 personality factors and facets is based on over 700 effect sizes. So what did they find? There is definitely value to separating the factors from the facets, as they show different levels of group difference. And most of the group differences (in cases with decent sample sizes) were small to moderate. Here are some of the largest and most robust findings (i.e., the 90% confidence interval does not include zero):
- Whites scored higher than Asians on even-temperedness (an aspect of emotional stability; d=.38)
- Hispanics scored higher than Whites on self-esteem (an aspect of emotional stability; d=.25)
- Blacks outscored Asians on global measures of emotional stability (d=.58)
- Blacks outscored Asians on global measures of extraversion (d=.41)
- Hispanics outscored Blacks on sociability (d=.30)
The article includes a very useful chart that summarizes the findings and includes indications of when adverse impact may occur given certain selection ratios. What I take away from all this is that the classic racial discrimination situation employers are worried about in the U.S. (Whites scoring higher than another group) is less of a concern with personality tests than with, say, cognitive ability tests. But (and this is a big but), it doesn't take much group difference to result in adverse impact (see Sackett & Ellingson, 1997).
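To see just how little group difference it takes, here's a back-of-the-envelope calculation assuming normally distributed scores with equal variance in both groups, and a cutoff that passes 30% of the higher-scoring group. The d values are borrowed from the ranges above; everything else is illustrative.

```python
from scipy.stats import norm

def impact_ratio(d, ref_pass_rate):
    """Four-fifths-rule impact ratio implied by a standardized group
    difference d, assuming normal score distributions with equal variance.
    The reference group is N(0,1); the focal group is N(-d,1)."""
    cutoff = norm.ppf(1 - ref_pass_rate)   # score needed to pass
    focal_pass_rate = norm.sf(cutoff + d)  # focal group sits d lower
    return focal_pass_rate / ref_pass_rate

for d in (0.25, 0.38, 0.58):
    print(f"d={d:.2f}: impact ratio = {impact_ratio(d, 0.30):.2f}")
# d=0.25: impact ratio = 0.73
# d=0.38: impact ratio = 0.61
# d=0.58: impact ratio = 0.45
```

At this selection ratio, even the smallest of those differences drops the impact ratio below the four-fifths (.80) threshold--which echoes Sackett & Ellingson's point.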
The second article is also about group differences. This time it's work sample tests and it's a meta-analysis of Black-White differences by Roth, Bobko, McFarland, and Buster.
The authors analyzed 40 effect sizes in their quest to dig further into this subject--and it's a good thing they did. A group difference (d) benchmark often cited for these exercises is .38 in favor of Whites. These authors obtained a value of .73, but with an important caveat--this value depends greatly on the particular work sample test.
For example, in-basket and technical exercises (e.g., reading a construction map) yielded d values of .74 and .76, respectively. On the lower end, oral briefings and role-plays had d values of .22 and .21, respectively. Scheduling exercises were in the middle at d=.52.
Why the difference? The authors provide data indicating that the more saturated a measure is with cognitive ability/job knowledge, the higher the d value; the more an exercise requires demonstrating social skills, the lower the d value.
Bottom line? Your choice of selection measure should always be based on the KSAs required per the job analysis. But given a choice between different exercises, consideration should be given to the group differences described above. Blindly selecting a work sample over, say, a cognitive ability test, may not yield the diversity dividends you anticipate (in addition to the fact that they may not be as predictive as we previously thought!).
Some important caveats should be noted about both of these pieces of research: (1) adverse impact is heavily dependent on factors other than group differences, such as applicant population, selection ratio, and stage in the selection process; and (2) from a legal perspective, adverse impact is only a problem if you don't have the validity evidence to back it up. Of course you should have this evidence anyway, because that's how you're deciding how to filter your candidates...right?
Friday, August 22, 2008
Tracking down the "Internet Applicant Rule"
When the OFCCP's Internet Applicant Recordkeeping Rule first came out it generated a lot of discussion.
You don't hear much about it now, even though it's one of the most important regulations that covered employers need to be concerned about when it comes to electronic recruiting. It specifies information that must be collected and retained about applicants and permissible screening criteria to filter down candidates.
Why the drop in popularity? Sure, it's not a new and sexy topic anymore. But another reason might be that the OFCCP doesn't make it easy to find information on the rule and they don't publicize it prominently anymore. There's no link to it on their homepage; the actual Federal Register rule is nowhere to be seen.
And the pretty-darn-helpful FAQs? Moved. Here. (granted it IS in the FAQ section)
Let's not forget about this particular regulation, as it impacts recruitment and selection (for those it applies to) just about as much as anything out there, including the Uniform Guidelines.
Wednesday, August 20, 2008
9th Circuit decision good news for employers
On August 7, 2008, the 9th Circuit Court of Appeals joined many other Circuits in deciding that in cases involving constitutional discriminatory hiring claims, the accrual period begins when candidates find out they aren't hired, or when a reasonable person would have realized this. The case is Zolotarev v. San Francisco.
Okay, so let's back up a second...what's a discrimination claim under the constitution? What we're talking about here are claims filed under Title 42 (Chapter 21) of the U.S. Code, such as Sections 1981 and 1983. These cases are typically brought against private sector employers (although as this case makes obvious, not always), and are sometimes combined with claims under other, more common, statutes, such as Title VII.
Why would someone want to bring a claim under these Sections? Several reasons:
- Unlike Title VII, ADA, or ADEA, there are no administrative requirements--in other words someone can file directly in court rather than going through, say, the EEOC
- Unlike discrimination cases brought under other laws, there are no caps on compensatory and punitive damages (of course no punitive damages are available from public sector entities)
- Also unlike cases brought under other statutes, there can be individual liability in these cases--specific hiring supervisors and HR staff can be held liable (of course this is pretty rare and most folks are indemnified, but still, having your name in a lawsuit isn't much fun)
So what's accrual? The statute of limitations specifies how long plaintiffs have to file a suit. Accrual refers to when this period starts. So in California, where this case was filed, the statute of limitations for these types of cases is one year (reiterated in this decision). When that year starts is the take home from this case--according to the 9th Circuit, it starts when the plaintiffs found out they weren't hired, or when a reasonable person would have realized this. It does not start when they later suspect they were wronged.
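As a trivial illustration of what accrual means in practice, here's the one-year clock in code. This is just a sketch; real limitations periods involve tolling and notice rules that a lawyer should verify.

```python
from datetime import date

def filing_deadline(notified: date, years: int = 1) -> date:
    """Last day to file, counting from when the candidate was told (or
    reasonably should have known) they weren't hired--not from when they
    later suspected they were wronged. Illustrative only."""
    try:
        return notified.replace(year=notified.year + years)
    except ValueError:  # notified on Feb 29
        return notified.replace(year=notified.year + years, month=3, day=1)

print(filing_deadline(date(2008, 8, 7)))   # 2009-08-07
print(filing_deadline(date(2008, 2, 29)))  # 2009-03-01
```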
This is in line with what many of the other Circuit courts have decided. So why is this good news for employers? Because it means these types of cases cannot be successfully brought longer than one year after candidates are informed they weren't chosen. Not only does this mean you can breathe a sigh of relief, it limits how long you need to retain your records (although you will want to check to see what other laws apply to you and how long their statute of limitations are).
Saturday, August 16, 2008
Can the Dewey Color System be used for hiring?
Recent articles in ERE (and elsewhere) have pointed out that CareerBuilder recently integrated the Dewey Color System into their site.
What is the Dewey Color System? As you might guess, it's a brief "test" where you choose your preference between various colors. It then provides you with a report that purports to describe your personality.
Pro? It's easy. It's much more easily digestible to most people than something like a traditional Big 5 test (and this is certainly not the first more "user friendly" personality test to emerge).
Con? We're far from being able to recommend this as a selection or hiring tool.
Some problems I have:
1) the entire basis of research support (according to their website) is this single article.
2) correlations with the Strong Interest Inventory reported in this article aren't terrible, but aren't outstanding either (median of .68).
3) correlations with the 16PF, which actually is used for hiring, were worse--median correlation was .51 with a range of .33-.68 (see the quick calculation after this list).
4) the results in this article are based on a single sample in a single location--no generalizability here.
5) to their credit, the authors of the article point out "what we have not yet established is that the Dewey Color System Test also predicts the behaviors for which these personality tests are typically used. Thus, more extensive validation should consider using color preferences directly to predict variables such as job satisfaction, leadership potential, etc."
6) beware any testing instrument that is described as "valid" or "validated." Tests aren't validated. Interpretations of them are. Read the Principles, folks (if you must, skip to page 4).
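To put those correlations in perspective, squaring r gives the proportion of variance two measures share:

```python
# Shared variance (r squared) for the median correlations reported above.
for label, r in [("Strong Interest Inventory", 0.68), ("16PF", 0.51)]:
    print(f"{label}: r={r:.2f} -> {r**2:.0%} shared variance")
# Strong Interest Inventory: r=0.68 -> 46% shared variance
# 16PF: r=0.51 -> 26% shared variance
```

So even the better of the two overlaps with color preferences on less than half the variance--thin support for a hiring tool.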
Is it easy? Yup. Might there be something to this? Yup. Is this another example of the P.T. Barnum effect? Yup. Should we be very careful and conduct good research before using personality tests? Yup.
Are we at a point where we can say this should be used for personnel selection? Nope.
p.s. speaking of personality tests, did you know Hogan Assessment Systems has their own blog? I didn't until now--check it out.
Wednesday, August 13, 2008
Speed v. power
Part of my job is constantly trying to figure out how to communicate better with our customers (hiring supervisors). Discussions about validity and reliability may interest me, but they're a guaranteed recipe for blank stares from most people. So we think of other ways to talk about the pros and cons of tests.
This document is one attempt at communicating assessment research in layperson terms. It graphs power (validity) on the Y-axis and speed of administration on the X-axis. We could easily have chosen other criteria, such as adverse impact or applicant acceptance, but we felt when you get right down to it, these are the factors customers care about most.
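If you'd like to mock up a similar chart for your own shop, here's a minimal matplotlib sketch. The methods and their placements below are placeholders I made up for illustration--plot your own ratings from the validity literature and your local administration times.

```python
import matplotlib.pyplot as plt

# Placeholder placements for illustration only, on rough 1-10 scales.
methods = {
    # name: (speed of administration, power/validity)
    "Structured interview": (5, 8),
    "Work sample": (3, 8),
    "Cognitive ability test": (8, 8),
    "Personality test": (7, 5),
    "Reference check": (6, 3),
}

fig, ax = plt.subplots(figsize=(6, 5))
for name, (speed, power) in methods.items():
    ax.scatter(speed, power)
    ax.annotate(name, (speed, power), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Speed of administration")
ax.set_ylabel("Power (validity)")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("Assessment methods: speed vs. power (illustrative)")
plt.tight_layout()
plt.show()
```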
So what do you see when you look at this graph? Do you think it communicates what we should be communicating? Have we over- or under-stated the case on any of the methods? Does this detract from basing the decision on job analysis?
Wednesday, August 06, 2008
Resumes? Applications? Or something in between?
A recent item in a newsletter published by ESR repeated a common recommendation in HR: employers should use standard applications, not resumes. I'd like to take the opposite viewpoint. Well, not opposite, but, well...you'll see.
The newsletter contains many good reasons for requiring a standard application. For example, applicants often provide you with information you may not want (e.g., membership in advocacy organizations). Applicants also use the most positive spin possible, (over)emphasizing accomplishments and leaving employment gaps. In addition, applicants may not give you all of the information you require, such as dates and salaries.
These are good reasons for requiring a standard application over a resume. But let me play devil's advocate for a minute. Think about the modern candidate experience. In order to apply for a job, you oftentimes have to spend hours--even days--searching through job boards and employer "career portals." If you're lucky enough to find a job that appears to be what you want (because of course employers' worst kept secret is they don't tell you the bad parts of a job), you have to complete a lengthy application (each time), or navigate your way through a "modern" applicant tracking system (read: GUI designed by IT).
Qualified candidates--who are hard to find in the first place--get fed up. They don't want to waste their time filling out applications or entering information into your ATS. They may just look for an opportunity that doesn't require them to describe their entire life experience. Hence the resume, which they already have on file and simply requires a quick update.
So how do we reconcile the needs of employers, who are doing their best to make sure they get the information they need, with those of candidates, who are trying to provide that information efficiently? I see several solutions:
1) The employer accepts resumes but makes very clear what the resume should contain. No unexplained employment gaps. Salary must be included. Etc.
2) Employers and candidates take advantage of a standardized third-party site that many folks already use for networking purposes (e.g., LinkedIn), again making clear what the profile must contain.
3) Employers use an ATS that takes less than 10 minutes for an applicant to apply.
Or how about a combination? How about giving the candidate options? The candidate must "register" with the employer's ATS, but all this takes is an email address. Then the candidate can either:
a) upload their resume (which must include all the information the employer needs)
or
b) route the employer to their on-line profile--which must exist on a prescribed set of sites (e.g., no MySpace pages).
These are just some (not particularly creative) ideas. I'm sure somebody out there has even better ones. But isn't it about time we figure out how to meet both candidate and employer needs when it comes to applying?
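For what it's worth, here's a rough sketch of the combined option in code. The required resume fields and the allowed profile sites are placeholders, not recommendations.

```python
from urllib.parse import urlparse

REQUIRED_FIELDS = {"employment_history", "salary", "dates"}  # placeholder list
ALLOWED_PROFILE_HOSTS = {"www.linkedin.com"}                 # placeholder list

def accept_application(email: str, resume_fields=None, profile_url=None):
    """Register with just an email, then take either a resume (checked for
    required content) or a profile link (checked against allowed sites)."""
    if "@" not in email:
        return "rejected: invalid email"
    if resume_fields is not None:
        missing = REQUIRED_FIELDS - set(resume_fields)
        return f"rejected: resume missing {sorted(missing)}" if missing else "accepted"
    if profile_url is not None:
        host = urlparse(profile_url).netloc
        return "accepted" if host in ALLOWED_PROFILE_HOSTS else "rejected: profile site not accepted"
    return "rejected: provide a resume or a profile link"

print(accept_application("a@b.com", resume_fields={"employment_history", "salary", "dates"}))
print(accept_application("a@b.com", profile_url="https://www.myspace.com/me"))
```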
Thursday, July 31, 2008
July 2008 Issues of Merit
The U.S. Merit Systems Protection Board (MSPB) just released its July 2008 Issues of Merit and there are at least three articles worth taking a look at:
- Using engagement strategies to retain retirement-ready employees (page 1)
- An overview of accomplishment records (page 6)
- Reference checking: beware speed over quality (page 7)
Monday, July 28, 2008
Font matters
When preparing a resume there are a few guiding principles. Don't make it too long. List only experience that's relevant. Organize the information in a logical way.
Now we can add a new one: use the right font.
In a recent study published on Usability News, the authors found that the font chosen for a resume has a significant impact on how the applicant is perceived.
Using the job of webmaster, the authors presented participants with resumes identical in content but varying in appropriateness of font, from most appropriate (Corbel) to least (Vivaldi).
Results? "Applicants" that used the most appropriate font were judged to be more professional, knowledgeable, mature, experienced, believable, and trustworthy than those that used less appropriate fonts.
Not only that, but those that used Corbel were more likely to be called for an interview!
So think about that the next time you're about to send out an email in pink Comic Sans MS.
Wednesday, July 23, 2008
EEOC releases guidance on religious discrimination
Yesterday the U.S. Equal Employment Opportunity Commission (EEOC) released new guidance documents intended to help individuals learn more about preventing discrimination based on religion.
The new documents include:
- A new Compliance Manual section regarding workplace discrimination based on religion; check out this example from the section on recruitment, hiring, and promotion:
"Darpak, who practices Buddhism, holds a Ph.D. degree in engineering and applied for a managerial position at the research firm where he has worked for ten years. He was rejected in favor of a non-Buddhist candidate who was less qualified. The company vice president who made the promotion decision advised Darpak that he was not selected because “we decided to go in a different direction.” However, the vice president confided to co-workers at a social function that he did not select Darpak because he thought a Christian manager could make better personal connections with the firm’s clients, many of whom are Christian. The vice president’s statement, combined with the lack of any legitimate non-discriminatory reason for selecting the less qualified candidate, as well as the evidence that Darpak was the best qualified candidate for the position, suggests that the proffered reason was a pretext for discrimination against Darpak because of his religious views."
- A Q&A fact sheet that includes this information about when employers need to accommodate applicants and employees:
"Title VII requires an employer, once on notice that a religious accommodation is needed, to reasonably accommodate an employee whose sincerely held religious belief, practice, or observance conflicts with a work requirement, unless doing so would pose an undue hardship. Under Title VII, the undue hardship defense to providing religious accommodation requires a showing that the proposed accommodation in a particular case poses a more than de minimis cost or burden. Note that this is a lower standard for an employer to meet than undue hardship under the Americans with Disabilities Act (ADA) which is defined in that statute as significant difficulty or expense."
- Best practices on eliminating discrimination, including the following:
- "Employers can reduce the risk of discriminatory employment decisions by establishing written objective criteria for evaluating candidates for hire or promotion and applying those criteria consistently to all candidates.
- In conducting job interviews, employers can ensure nondiscriminatory treatment by asking the same questions of all applicants for a particular job or category of job and inquiring about matters directly related to the position in question."
Sounds like an endorsement of structured interviews if I ever saw one!