Celebrating 10 years of the science and practice of matching employer needs with individual talent.
Sunday, January 30, 2011
Job ads: Choose your words wisely
Back in 2007 I began a project through this website to collect survey data from passers-by on words found in job ads. You may have seen a link to the survey on this blog's home page--or even taken the survey itself.
I was motivated to undertake this project because at the time (and this may still be the case) there was very little research on how attractive/effective certain words are in job advertisements. Seems like a simple question, and I was curious.
After 3.5 years I've decided to share what the data show. My hope was to get a large sample; I'm settling for just under 150 responses, so take that into account. And of course the generalizability of the results is questionable, although the fact that the data were gathered over such a long period and the respondents are fairly diverse may help us feel a little more comfortable.
Method
SurveyMonkey. Four questions. I began collecting data in June of 2007; the last data point came in November of 2010. Each question presented the same series of fifteen words or phrases, generated pretty much off the top of my head, plus an "Other" option. Unfortunately I didn't think to randomize the presentation of options, so keep that in mind. I had to use two different collectors in SurveyMonkey since I have a free account and responses max out at 100. The graphics presented below are for the first collector only (I couldn't download and combine the data sets because of the free account), which has more responses but is older data. You can see/take the survey here.
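For anyone replicating this with a paid account (or with exported CSVs), combining the two collectors is straightforward. Here is a minimal sketch in Python; the file names and the pandas approach are my assumptions, not anything done for this post:

```python
# Sketch only: combine two hypothetical SurveyMonkey exports (one per
# collector) into a single data set tagged by collector, so results can be
# compared over time. File names are assumptions, not actual exports.
import pandas as pd

collector1 = pd.read_csv("collector1_export.csv")  # earlier responses (hypothetical)
collector2 = pd.read_csv("collector2_export.csv")  # later responses (hypothetical)

collector1["collector"] = 1
collector2["collector"] = 2

combined = pd.concat([collector1, collector2], ignore_index=True)
print(combined.groupby("collector").size())
```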
Participants
One hundred and forty-seven blog visitors. I expected most to identify primarily as HR professionals or academics, but the largest group (43%) chose job seeker. The remaining respondents fell fairly evenly into the other categories, including simply being interested in the subject matter.
Results
Word Frequency
The first question asked participants what words they see most often in job ads.
The #1 answer, by far: "Motivated." It was selected by 70-80% of respondents, depending on the collector (it was also the first choice presented).
Other frequent answers (in the 50% vicinity):
- Professional
- Organized
- Works Well Under Pressure
Interestingly, "Motivated" seems to have become more frequent over time, as has "Works Well Under Pressure" and "Flexible". "Independent", "High Energy", and "Friendly" were among the words becoming less frequent.
How about least frequent?
- Conscientious
- Smart
- Friendly
Both "Conscientious" and "Smart" became less frequent over time.
Emotional Response
The second question asked participants to rate, on a seven-point scale from "Very Negative" to "Very Positive", their emotional response to the same words or phrases presented in the first question. Most words/phrases received positive ratings, and the difference in means between the best-liked and least-liked was less than one point.
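To make that "less than one point" comparison concrete, here is a minimal sketch of how the mean ratings might be computed and ranked, assuming a hypothetical export with one 1-7 rating column per word (column names are mine, not the survey's):

```python
# Sketch only: mean emotional-response rating per word/phrase on the 1-7
# scale, sorted from best- to least-liked. File and column names are
# hypothetical.
import pandas as pd

ratings = pd.read_csv("emotional_response_ratings.csv")  # hypothetical export
word_cols = ["motivated", "reliable", "professional", "independent",
             "works_well_under_pressure", "high_energy", "detail_oriented"]

means = ratings[word_cols].mean().sort_values(ascending=False)
print(means)
print(f"Spread between best- and least-liked means: {means.max() - means.min():.2f} points")
```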
So which received the most positive ratings?
- Motivated
- Reliable
- Professional
- Independent
How about least positive?
- Works Well Under Pressure
- High Energy
- Detail Oriented
My guess is that, to the extent there is a difference here, the best-liked words describe jobs that would allow applicants a fair amount of flexibility over their work and involve a stimulating work environment. The least positive words are likely associated with fast-paced jobs (hey, that would have been a good option), such as customer service, that may not be particularly stimulating.
Application Intentions
The last question asked respondents to select how likely they would be to apply for a job that contained the same list of words/phrases (knowing no other details). As with emotional response, most responses were associated with high intentions to apply, and the difference between most likely and least likely means was less than one point.
So which led to the highest application intentions? "Professional" and "Reliable" were consistent across the two collectors. "Motivated" became more positive over time, as did "Independent". This is consistent with emotional response.
How about the lowest application intentions? Matching the emotional responses, "High Energy" and "Detail Oriented" were at the bottom of the list. "Smart" became less associated with application intentions over time, as did "Works Well Under Pressure".
Discussion
Judging by this data, it appears that organizations wishing to distinguish themselves using job advertisements should feel comfortable using words that directly speak to applicant personality, such as "conscientious" and "friendly"--these were less frequently found and not associated with particularly negative responses. Organizations should try to use words that imply an environment that allows applicants to use their own judgment when making decisions and stay away from words that imply a hectic, always-on work environment.
Of course all this depends on the particular job being advertised. As we know, presenting candidates with a realistic job preview is immensely helpful (for them as well as the organization), and if the job is heavy customer service, well, it just is. In addition, this data says nothing about the quality of applicants--it may be that higher performing employees prefer different words than lower-performing ones.
I hope at the very least you found these results interesting. It's something to think about when crafting your job ads, and of course one could run a much more sophisticated study by including things like occupation and demographics.
Thursday, January 27, 2011
Jan TIP gems: HRO, UIT, SIOP, and VII
Yes, my goal was to create a blog post title using words no longer than four letters.
Anyway, for those non-SIOP'ers out there, or SIOP'ers that may have missed 'em, there were some gems in the latest issue of TIP:
How I/O can shape the practice of strategic human resources outsourcing (HRO)
A great little study on perceptions of various ways of mitigating cheating on unproctored Internet testing (UIT)
The difference between academics and practitioners in terms of what topics are valued at the SIOP conference (e.g., the latter were more interested in job analysis, staffing, and strategic HR)
Last but not least, a point, counter-point on whether the addition of sex as a protected category under Title VII was a joke
Monday, January 24, 2011
January, 2011 J.A.P.: Interests, trainability tests, interns, and personality tests
The January issue of the Journal of Applied Psychology is out with some great content, so let's jump right in.
First up, an intriguing study by Van Iddekinge et al. on an interest inventory. Even though many suspect vocational interest plays a part in motivating work behavior, historically the published relationship between interest and job performance has not been strong. The authors of this study created a new measure of interest and, using a decent sample size (418), found surprisingly high (corrected) correlations between scores and various criteria, including job knowledge, job performance, and continuance intentions (mean R = .31). Scores also predicted additional variance beyond cognitive ability and measures of the Big 5 personality dimensions. Could we be on the cusp of a revolution in measures of interest? This could help bridge the gap between KSAs and discretionary effort.
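For readers curious what "predicted additional variance beyond cognitive ability and the Big 5" looks like in practice, here is a minimal hierarchical-regression sketch. The data file, column names, and use of statsmodels are my assumptions for illustration, not the authors' actual analysis:

```python
# Sketch only: incremental validity of an interest score over cognitive
# ability and the Big Five, estimated as the change in R-squared when the
# interest score is entered last. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("validation_sample.csv")  # hypothetical file

base = smf.ols(
    "job_performance ~ cognitive_ability + openness + conscientiousness + "
    "extraversion + agreeableness + neuroticism",
    data=df,
).fit()

full = smf.ols(
    "job_performance ~ cognitive_ability + openness + conscientiousness + "
    "extraversion + agreeableness + neuroticism + interest_score",
    data=df,
).fit()

# Incremental validity = additional variance explained by the interest score.
print(f"Delta R^2 = {full.rsquared - base.rsquared:.3f}")
```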
Next, Roth et al. with an update on trainability tests--their predictive validity as well as Black-White score differences. As a refresher, trainability tests are a sub-category of work sample tests that involve a structured period of learning for applicants and are designed to measure how well they can learn a new skill. Previous (limited) research indicated they predict training performance fairly well but job performance less so, and that this prediction decreases over time. However, the authors of the current study show, using data from a recent video-based trainability exam, that the validity may be higher than we thought. Unfortunately the exam also showed a high level of mean differences between Black and White applicants, matching or exceeding that typically found for cognitive ability (with which the test correlated highly).
Did someone say personality testing? (No, but you knew it was coming.) Le et al. are up next with an update on the curvilinear relationship between personality scores and job performance. Using two different samples, the authors found not only the hypothesized curvilinear relationship but also that the inflection point (after which the relationship disappears) occurs later in jobs that are more complex--similar to the relationship between experience and performance. So, for example, in a complex job, Conscientiousness scores may continue to correlate with job performance (as well as OCBs and CWBs) at higher score levels than they would for, say, a retail clerk. Important for anyone making assumptions about what personality inventory scores imply.
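A quick way to see what testing for curvilinearity involves is to add a squared (centered) personality term to the regression. This is a sketch under assumed column names, not the authors' model:

```python
# Sketch only: test for a curvilinear personality-performance relationship
# by adding a squared, mean-centered Conscientiousness term. Names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("personality_performance.csv")  # hypothetical file
df["consc_c"] = df["conscientiousness"] - df["conscientiousness"].mean()  # center first

model = smf.ols("job_performance ~ consc_c + I(consc_c ** 2)", data=df).fit()
print(model.summary())

# A significant negative coefficient on the squared term is consistent with
# the relationship flattening out (or reversing) at higher score levels.
```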
Next up is your second personality test article, this time by Landers et al., who provide a warning about faking. The authors noticed a new trend in responses, which they label "blatant extreme responding" (BER; not listed as an X Games sport), indicated by answering all "1"s or "5"s on an inventory. They hypothesize that this is due to a coaching rumor, which seems to have been supported by the fact that internal retesters showed a higher prevalence of BER than the general sample. On the plus side, an interactive warning seems to have reduced the spread. Hard to tell if this is anything new, since we know faking does indeed occur--the debate is over its impact on validity.
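Operationally, BER is easy to screen for: flag anyone who gave the minimum or maximum response to every item. A minimal sketch, with hypothetical item columns (this is my illustration, not the authors' scoring routine):

```python
# Sketch only: flag "blatant extreme responding" -- respondents who answer
# every 1-5 inventory item with all 1s or all 5s. Column names are
# hypothetical.
import pandas as pd

responses = pd.read_csv("inventory_responses.csv")  # hypothetical file
items = [c for c in responses.columns if c.startswith("item_")]

all_min = responses[items].eq(1).all(axis=1)
all_max = responses[items].eq(5).all(axis=1)
responses["ber_flag"] = all_min | all_max

print(f"BER prevalence: {responses['ber_flag'].mean():.1%}")
```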
Last but not least, Zhao and Liden write about internship programs and the impression management that occurs on the part of interns as well as organizations. Not surprisingly, interns who wished to get hired by the organization were more likely to use self-promotion and ingratiation, which increased the likelihood of receiving a job offer. Perhaps more interesting is the finding that organizations wishing to hire interns permanently exhibited more openness to the interns' creativity, which in turn increased the likelihood that the interns would apply. Lesson? If you have a good intern you want to bring on full-time, solicit and be open to their suggestions.
Thursday, January 20, 2011
Supreme Court upholds NASA background checks
Yesterday, in a unanimous 8-0 decision*, the U.S. Supreme Court handed federal employers a victory when it upheld a background check process used by the National Aeronautics and Space Administration (NASA).
The checks, which have been standard for millions of federal employees since the 1950s, were challenged by a group of contract employees at the Jet Propulsion Laboratory (JPL) in California who previously had not been required to undergo them. The employees claimed the questions asked during the background checks infringed on their "constitutional right to information privacy" by asking about things like drug treatment.
In their decision, the court held that "the Government has an interest in conducting basic background checks in order to ensure the security of its facilities and to employ a competent, reliable workforce to carry out the people’s business" and that the challenged questions were "reasonable, employment-related inquiries that further the Government’s interests in managing its internal operations." It also held that there was no meaningful difference between contract employees and civil service employees.
With respect to questions regarding drug use, the court pointed out that the government used follow-up questions regarding treatment as a "mitigating" factor to separate out drug users from those who are taking steps to address and overcome their problems.
The court gave the federal government a low hurdle to jump when it comes to business necessity, rejecting the argument that "the Government has a constitutional burden to demonstrate that its employment background questions are “necessary” or the least restrictive means of furthering its interests."
The court also touched on reference checks, writing "Asking an applicant’s designated references broad questions about job suitability is an appropriate tool for separating strong candidates from weak ones. The reasonableness of such questions is illustrated by their pervasiveness in the public and private sectors."
Importantly, the court pointed out several times that concerns regarding the collection of personal information by the government were mitigated by the substantial controls in place to protect such information.
You can read the decision here. You can also see the challenged forms here and here.
*Justice Kagan took no part in the decision
Monday, January 10, 2011
Why aren't more supervisors held accountable for their hiring mistakes?
Before I begin this rant...err...post, let's create a fantasy world.
No, not the kind with elves and magic (maybe in a future post).
The kind where organizations run efficiently, consistently, and rationally (see I told you it was going to be a fantasy).
So we'll set it up with two assumptions:
1) Job performance is defined for every individual in the organization in clear, meaningful ways.
2) All employees are held accountable for their critical decisions.
Okay, have this fantasy world in your mind? Great. Now that we have that out of the way, I ask you: Why aren't more supervisors in this world held accountable for their hiring decisions?
Think about it. What would you do with a supervisor who repeatedly:
- Made bad decisions about what technology to purchase/implement
- Made ineffective choices regarding how to divvy up team assignments
- Spent time inefficiently and chose to focus on low-priority projects
- Conducted meetings that were too long and had little payoff
Presumably write them up or get rid of them. All of these decisions have to do with the use of resources. Yet I could make a strong argument that the most important resource decision supervisors make is about who to bring into the organization.
So honestly, why aren't more supervisors held accountable? I have several theories:
1) The hires are blamed, not the supervisor. In some cases this makes sense. But when a hiring supervisor demonstrates a pattern of getting it wrong, someone needs to be looking at the decision-makers.
2) The connection between the decision and the consequences is distant. It's easy to connect a decision to its immediate consequences, but once the employee has been around for 2 months, it's a lot easier to focus on their mistakes than on how they got there to begin with.
3) It's harder to hold supervisors accountable. I think most organizations simply have a harder time quantifying or otherwise seeing supervisory performance in objective ways. Even when performance is measured, too often it focuses on how they manage their employees rather than their selection decisions.
4) People have a hard time giving negative feedback, especially to supervisors. It's hard enough to tell someone they're not meeting specific production goals; it's even harder to pinpoint exactly what the supervisor is doing wrong when hiring people--and then communicate that.
5) It's political. Organizations are typically even more committed to supervisors than to rank-and-file employees because the decision of who to put there likely involved folks higher up in the organization. Thus, to admit a mistake in the supervisor is to admit a mistake on the part of whoever hired them. Not to mention that supervisors are part of the very management structure that has issues in the first place.
6) It's not clearly measured as part of the job of a supervisor. Okay, I realize this is somewhat against the rules of our fantasy world, but I suspect this is often a big problem. Supervisors are usually measured against things like "leadership" or "administrative ability", not "the percentage of new hires who receive excellent performance ratings after 6 months."
So what can we do about this? Well, I think some of the answers present themselves in the problems listed above. Clearly document that hiring success will be considered a key performance metric for supervisors. Provide them training and feedback on the topic. Have consequences for those who keep getting it wrong (which may simply be taking the decision largely out of their hands).
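As an illustration of what tracking hiring success might look like, here is a minimal sketch that computes, per supervisor, the percentage of their hires rated as successful after six months. The data layout, column names, and rating threshold are all hypothetical:

```python
# Sketch only: percentage of each supervisor's hires who reach a "successful"
# performance rating at the 6-month mark. Column names and the threshold
# (>= 4 on a 5-point scale) are assumptions for illustration.
import pandas as pd

hires = pd.read_csv("new_hire_outcomes.csv")  # hypothetical HRIS export
hires["successful"] = hires["six_month_rating"] >= 4

by_supervisor = (
    hires.groupby("hiring_supervisor")["successful"]
    .agg(hires="count", pct_successful="mean")
)
by_supervisor["pct_successful"] = (by_supervisor["pct_successful"] * 100).round(1)
print(by_supervisor.sort_values("pct_successful", ascending=False))
```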
Should HR be more involved? Almost certainly. It is after all HR's duty to maximize human performance in an organization. We're supposed to be the experts on this stuff--if we can't articulate the issue and suggest alternatives, why would we expect line management to?
Finally, let me end on something else to think about: Does your organization praise supervisors who repeatedly make good hiring decisions? We know from basic psychology that positive reinforcement is more likely to result in repeat behavior than punishment or negative reinforcement. But which do we normally use?
Now back to reality...
Thursday, January 06, 2011
IPAC Call for Proposals Open
The Call for Proposals for the 2011 IPAC Conference is now open until January 31st. The Program Committee is seeking proposals for symposia, panel discussions, paper presentations, tutorials, and pre-conference workshops.
The conference will be held July 17-20 in Washington, D.C. at the Dupont Hotel and promises to match previous events in terms of quality and networking opportunities.