It's easy to give out advice. HR does it all the time.
But how well does HR practice what it preaches?
In the most recent issue of the U.S. Merit Systems Protection Board's (MSPB) newsletter, Issues of Merit, an article titled "Taking our own advice" describes the agency's attempts to ensure it is walking the walk when it comes to hiring procedures.
So what did they find?
1) Their job announcements weren't all that attractive. They contained jargon and too much information. Perhaps more importantly, they didn't "sell" the job.
Solution: The job description was refined and rewritten so qualifications were easier to understand, excess information was removed, and the whole announcement was made easier to read. In addition, they added a job preview component that helps applicants decide if the job is a good fit.
2) They were using questionnaires as an initial screen that had low validity.
Solution: Questionnaires were replaced with an accomplishment record, which they hope (and research suggests) will better predict who will succeed in the job.
3) They were using the "rule of three" for external hires which limited their ability to consider a broad candidate group.
Solution: "Rule of three" replaced with category rating, which allows them to consider more candidates.
4) Recruitment methods weren't as broad as they could be.
Solution: MSPB worked with OPM to feature their jobs prominently through USAJOBS, the federal government's online job posting site. In addition they made greater efforts to actively seek out qualified candidates whose resumes were in USAJOBS. Finally, they used professional organizations to help advertise their opportunities.
An honest review of recruitment and assessment procedures in any organization will undoubtedly result in areas for improvement. Kudos to MSPB for following their own advice.
Tuesday, January 29, 2008
Friday, January 25, 2008
The January 2008 issue of the Journal of Applied Psychology is out. Unfortunately there are only three articles directly related to recruitment and assessment, but they're pretty good ones, so let's dive in.
First up, a Monte Carlo investigation of the impact of faking on personality tests by Komar et al. "What is a Monte Carlo investigation?" you may ask. Essentially it's when researchers use computers to simulate data scenarios rather than collecting actual data from participants/subjects/victims. Anyway, the researchers looked at the impact on the criterion-related validity (as measured by supervisory ratings) of conscientiousness scores under various "faking" scenarios. They found that validity is affected by a number of factors, most notably the proportion of fakers, the magnitude of faking, and the relationship between faking and performance. Another shot across the bow of self-report personality inventories, methinks, although the debate will no doubt continue!
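If you're curious what a simulation like this looks like under the hood, here's a minimal sketch. The parameter values (true validity of .30, faker rate, faking magnitude) are purely illustrative and are not the ones Komar et al. used; the point is just to show how adding performance-unrelated score inflation for a subset of "fakers" attenuates the observed validity coefficient:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def simulate_validity(n=5000, true_validity=0.30,
                      faker_rate=0.0, faking_sd=1.0, seed=42):
    """Observed criterion-related validity when a proportion of
    applicants inflate ("fake") their personality scores."""
    random.seed(seed)
    trait, perf = [], []
    for _ in range(n):
        t = random.gauss(0, 1)                 # true conscientiousness
        # performance correlates .30 with the true trait
        p = true_validity * t + random.gauss(0, (1 - true_validity ** 2) ** 0.5)
        trait.append(t)
        perf.append(p)
    # fakers add a positive, performance-unrelated bump to their scores
    observed = [
        t + abs(random.gauss(0, faking_sd))
        if random.random() < faker_rate else t
        for t in trait
    ]
    return pearson(observed, perf)

r_clean = simulate_validity(faker_rate=0.0)              # no faking
r_faked = simulate_validity(faker_rate=0.5, faking_sd=1.5)  # half the pool fakes
```

Running this, `r_clean` lands near the built-in .30 while `r_faked` comes out noticeably lower, which is the basic attenuation effect the study quantifies across many such scenarios.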
Next a fascinating study of motherhood bias in both expectations and screening decisions by Heilman and Okimoto. The researchers found a bias against both male and female parents when it came to anticipated job commitment, achievement striving, and dependability, although anticipated competence was uniquely low for mothers and seemed to be the major contributing factor to lowered expectations and screening recommendations. An unfortunate reminder that these factors do matter and something to watch out for. The results are reminiscent of negative behavior toward "pregnant" women found in a previous study.
Finally, Zyphur, Chaturvedi, and Arvey present a discussion of job performance. They address two subjects: the impact of past performance on future performance and individual differences in performance trajectories. Analyzing past literature, the authors note that performance feedback influences future performance directly and that different individuals have different latent performance trajectories, which has big implications for selection. Why? Because many assessment techniques (e.g., T&Es, behavioral interviews) rely on a general assumption that more experience equals better performance. This study adds ammunition to those who argue that assumption has serious flaws (or at least is overly simplistic).
In addition to these three, you may find the following interesting as well:
Challenging conventional wisdom about who quits: Revelations from corporate America. (great stuff for those of you interested in retention)
Effectiveness of error management training: A meta-analysis. (for all you trainers out there)
Effects of task performance, helping, voice, and organizational loyalty on performance appraisal ratings. (for those interested in performance ratings)
Wednesday, January 23, 2008
Despite aggressive (and creative) recruiting tactics, and a $4B budget, the U.S. Army failed to meet its recruiting goal in 2007, according to a report by the National Priorities Project.
Even worse than missing the numbers, the Army continues to see a change in who is being recruited. For the third consecutive year they failed to get sufficient numbers of recruits with high school diplomas, which the Army reports is the single best predictor of successful completion of a first term of enlistment.
In addition, the Army has historically had a goal of at least 67% of recruits scoring at or above the 50th percentile on the Armed Forces Qualification Test (AFQT, which tests word knowledge, paragraph comprehension, arithmetic reasoning, and mathematics knowledge; more details here). Since 2005 this number has fallen, and in 2007 it was 60.8%.
The Army puts educational attainment and AFQT scores together for a measure of "quality." According to the report:
A ‘high quality’ recruit is one who scores at or above the 50th percentile on the AFQT, and who is tier I (has a regular high school diploma or better). The DoD strives to have all recruits be ‘high quality’ as these recruits will be more likely to complete contracted enlistment terms and perform better in training and on the job.
Unfortunately the percentage of "high quality" recruits continues to drop. In 2005 it was 56.2%. Last year it was 44.6%.
The report includes state-by-state tables as well as a discussion of what the implications of this recruiting challenge are for the Army. Worth a look.
Monday, January 21, 2008
The U.S. Equal Employment Opportunity Commission (EEOC) recently issued a report titled Improving the participation rate of people with targeted disabilities in the federal workforce.
Long title aside, the report offers all employers some concrete suggestions for increasing the hiring rate of individuals with disabilities, a chronically underemployed segment of the population. These recommendations include some that you probably have already thought of (establish a task force, train your managers, recruit broadly) but also include some you may not have:
- Ensure hired individuals can perform the essential functions of the position (so you don't set people up for failure)
- Use a panel interview to minimize bias of a single individual (which we know is a best practice for interviews generally)
- Use clear language and avoid jargon on job announcements (something we should be doing anyway)
- Encourage people with disabilities to self-identify and ensure confidentiality (your numbers may be artificially low)
There's a lot of good information and resources in the report that go beyond the federal workforce. Definitely worth checking out.
Thursday, January 17, 2008
You've heard of Chief Knowledge Officer, Chief Fun Officer, and even Chief Evangelist.
But Chief Magic Official?
One guess as to who would be filling that job title.
If you guessed Disney, go to the head of the class.
Yes, Disney is out to hire its first "CMO" who will appear periodically to grant "dreams" to guests at Disney Parks in Anaheim and Orlando this year. Disney's put together a great recruiting website that includes:
- A Magic Aptitude Test (M.A.T.) that you can take to see if you qualify--you have to take a look at it just to see the pencil you'll be using for the test (by the way, I passed and am apparently similar to Mickey Mouse)
- A great job preview video
- A creative job description and statement of qualifications
In a new twist, applications must include a video resume that will be voted on online. The top three vote getters will be invited (along with three guests) to Walt Disney World resort for further vetting.
So will this work? Probably. It is Disney, after all, who doesn't usually have too much trouble attracting candidates. And some research indicates applicants are more attracted to creative job titles.
But whatever happens, you gotta admire their creativity!
Tuesday, January 15, 2008
The U.S. Equal Employment Opportunity Commission, as part of its E-RACE Initiative, has produced two new public service announcements with renowned jazz musician Wynton Marsalis focusing on discrimination and equal opportunity. They're short and to the point, and you can see them here.
Friday, January 11, 2008
Registration is now open for SIOP's Annual Conference, to be held April 10-12 in San Francisco.
Cost for early registration (before 2/29): $140 for members, $330 for nonmembers, $85 for students.
Workshops will be held on the 9th; registration is $400 for members and $650 for nonmembers, which includes two workshops.
Good things about the SIOP conference: tons of presentations and lots of technical information.
Bad things about the SIOP conference: tons of presentations and lots of technical information.
Thursday, January 10, 2008
In a previous post I described one of the most interesting articles in the December '07 issue of the International Journal of Selection and Assessment (IJSA). In this post I'll go over the rest of the issue.
First up, Carless & Imber with a study of interviewer characteristics. Using a sample of 450 graduate students, they found that personal aspects of the interviewer, such as warmth, friendliness, and humor, as well as job-related aspects such as job knowledge and general competence, affected how attracted interviewees were to the job as well as their job choice intentions. Not only that, aspects of the interviewer influenced how anxious the interviewee was. Lesson: As with recruiters, make sure the people doing the interviewing are trained and are the type of people you'd like representing your organization!
Next, LaHuis, MacLane, and Schlessman with a study of reapplication among 542 applicants to a U.S. government position. Focusing on the 9% that didn't get the job but reapplied the following year, the authors found that "opportunity to perform" played a significant role in reapplication behavior. Lesson: If you want people to reapply, give them the chance to show their strengths.
Third, Carless and Wintle with a study of what attracts applicants to particular job opportunities--specifically looking at flexible career paths and who's doing the recruiting. Participants were 201 "young, inexperienced job seekers" who completed a questionnaire. Results? Flexible career paths were a big attraction (compared to traditional career paths) but recruiter background (HR or external agency) made no difference, in line with previous research that's found that recruiter personality is the key rather than things like background or demographics.
Next, Hermelin, Lievens, and Robertson conducted a meta-analysis of assessment center scores. Based on 26 studies (N=5,850) the authors found a corrected correlation of .28 between AC scores and subsequent supervisory ratings, which they hypothesize is lower than the true value due to range restriction of assessment center scores (and lower than the corrected value of .37 that Schmidt & Hunter reported). Alternate version (if the link gets fixed) here.
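The article summary doesn't reproduce the authors' exact correction, but the standard Thorndike Case II formula for direct range restriction illustrates why restricted AC scores pull the observed validity down. Here u is the ratio of the predictor's standard deviation in the full applicant pool to its standard deviation in the selected group; the u = 1.5 value below is purely illustrative, not taken from the meta-analysis:

```python
def correct_range_restriction(r_obs, u):
    """Thorndike Case II correction for direct range restriction.

    r_obs: validity observed in the restricted (selected) group
    u:     SD of predictor in applicant pool / SD in selected group
    """
    return (r_obs * u) / (1 + r_obs ** 2 * (u ** 2 - 1)) ** 0.5

# An observed validity of .28 with an assumed u of 1.5 corrects upward:
r_corrected = correct_range_restriction(0.28, 1.5)  # ≈ .40
```

When u = 1.0 (no restriction) the formula returns the observed value unchanged; the more severely the organization selects on AC scores, the larger u gets and the more the observed .28 understates the operational validity.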
Heinsman et al. provide the next study, which looks at how psychologists view the makeup and measurement of competencies, still a hot topic in HR circles. Using data from over 900 applicants who participated in a 1-day selection process, they examined the relationship between competencies (Thinking, Feeling, and Power) and traditional measures of cognitive ability, personality, and assessment center performance. Results? To assess Thinking, psychologists in this study relied upon measures of cognitive ability (makes sense!). To assess Feeling, they used interview simulation scores as well as personality test scores. Finally, when analyzing Power, they relied mostly on personality scores.
Last but not least, Lievens and Anseel with a fascinating study of creating alternate versions of computerized in-baskets. The alternate version used a very similar situation that people had to respond to, but altered the context for 10 of the 20 items. Results? No significant difference in overall in-basket score between the two forms. So if you're looking to duplicate your in-baskets, check this out! Oh, and there's an alternate version of the article here.
Okay, I might as well tell you about the other two articles, because you might be interested. One is on self and peer ratings and one is on survey non-respondents (hint: it's not the star performers that aren't responding). Both have interesting results, so check 'em out!
Monday, January 07, 2008
The December '07 issue of the International Journal of Selection and Assessment (IJSA) has a lot of great content, so I decided to split it up between two posts. This post will be devoted to one of the studies because it's such a hot topic--web-based recruitment.
In this study, Van Hoye and Lievens gathered data from 108 applicants for head nurse positions in Belgium and looked at how they responded to two types of web-based information. The first was employee testimonials ("click here to see our happy employees talk about their jobs!"), a very common approach used by many organizations including Enterprise, Google, and the U.S. government.
The second was employer recommendations via word of mouth; in this case, because it was over the web, the authors dubbed it word-of-mouse ("hey, did you hear about the opportunity at XYZ? They're a great place to work.").
Results? Well, there are several, and they all link back to source credibility.
First, word-of-mouse was a more powerful attractant than testimonials. Why? It appears the testimonials are seen as obviously controlled by the organization (and therefore possibly misleading), whereas word-of-mouse is seen as more credible. However, if you're going to use testimonials, those that focus on the individual were much more powerful than those focusing on the organization. The reverse was true for word-of-mouse: a focus on the organization was more powerful than on the employees (which underlines the importance of employer brand).
The big result is that considering all four combinations, word-of-mouse information about the organization was the clear winner in terms of attracting applicants. So efforts designed to increase the likelihood of people spreading the good word about your organization are likely to pay off. How do we do that? The authors offer some focus areas:
- image management
- campus recruitment
- building relationships with key opinion leaders (e.g., career counselors, class presidents)
- employee referral programs
I would add another: make sure your employees like what they do! Happy employees are hands down one of your most effective recruiting techniques (assuming you can't offer millions in stock options).
This is a keeper, folks, and there's a version here thanks to Dr. Lievens.