Tuesday, February 05, 2008

Using personality tests to prevent turnover

Turnover is costly and a major headache for most employers, and it becomes an even bigger problem when the labor market tightens. Can different hiring practices help solve the problem?

Perhaps, says Texas A&M professor Ryan Zimmerman, whose work in the area has recently been profiled in various places, including the national press, recruiting websites, and SIOP (article no longer available except cached version).

So what has Dr. Zimmerman found? We'll know more when his meta-analytic research is published in an upcoming issue of Personnel Psychology, but in a nutshell he found that three aspects of the Big 5 are linked to turnover:

- Agreeableness
- Conscientiousness
- Emotional stability

Zimmerman argues that selecting people based on these qualities is just as powerful as, say, how the job is designed.

Some things to consider while we wait for the research to be published:

- Survey after survey shows that quality of supervision is a powerful retention factor (check out the Gallup 12 to see an example). This suggests selecting good supervisors may be an equally powerful factor.

- Other research, including some by Zimmerman himself, suggests person-organization (P-O) fit may also be a significant contributing factor to turnover. P-O fit is a combination of both the person and the environment, so personality would be just one side of the equation.

- Many people forget there is positive and negative turnover--not all turnover is bad because there are some people who simply aren't a good fit for the job. Simply finding a correlation between one factor and turnover is the beginning of the story, not the end.

- Average job tenure, particularly among younger generations, is around 2-3 years. So is turnover really what we should be focusing on?

Stay tuned here for more details when the research gets published.

Tuesday, January 29, 2008

Et tu, HR?

It's easy to give out advice. HR does it all the time.

But how well does HR practice what it preaches?

In the most recent issue of the U.S. Merit System Protection Board's (MSPB) newsletter, Issues of Merit, in an article titled "Taking our own advice", the author describes attempts by MSPB to ensure they are walking the walk when it comes to hiring procedures.

So what did they find?

1) Their job announcements weren't all that attractive. They contained jargon and too much information. Perhaps more importantly, they didn't "sell" the job.

Solution: The job description was refined and rewritten so that qualifications were easier to understand, excess information was removed, and the announcement was easier to read. In addition, they added a job preview component that helps applicants decide if the job is a good fit.

2) They were using questionnaires with low validity as an initial screen.

Solution: Questionnaires were replaced with an accomplishment record, which they hope (and research suggests) will better predict who will succeed in the job.

3) They were using the "rule of three" for external hires which limited their ability to consider a broad candidate group.

Solution: "Rule of three" replaced with category rating, which allows them to consider more candidates.

4) Recruitment methods weren't as broad as they could be.

Solution: MSPB worked with OPM to feature their jobs prominently through USAJOBS, the federal government's online job posting site. In addition they made greater efforts to actively seek out qualified candidates whose resumes were in USAJOBS. Finally, they used professional organizations to help advertise their opportunities.

An honest review of recruitment and assessment procedures in any organization will undoubtedly result in areas for improvement. Kudos to MSPB for following their own advice.

Friday, January 25, 2008

January '08 issue of J.A.P.

The January 2008 issue of the Journal of Applied Psychology is out. Unfortunately there are only three articles directly related to recruitment and assessment, but they're pretty good ones, so let's dive in.

First up, a Monte Carlo investigation of the impact of faking on personality tests by Komar et al. "What is a Monte Carlo investigation?" you may ask. Essentially it's when researchers use computers to simulate data scenarios rather than collecting actual data from participants/subjects/victims. Anyway, the researchers looked at the impact on the criterion-related validity (as measured by supervisory ratings) of conscientiousness scores under various "faking" scenarios. They found that validity is impacted by a variety of factors, most notably the proportion of fakers, the magnitude of faking, and the relationship between faking and performance. Another shot across the bow of self-report personality inventories, methinks, although the debate will no doubt continue!
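
To make the Monte Carlo idea concrete, here's a minimal sketch of the kind of simulation involved. The design and every parameter value below are my own illustrative assumptions, not those of Komar et al.: generate correlated predictor and criterion scores, inflate the predictor for a subset of "fakers," and watch the observed validity shrink.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_validity(n=10_000, true_r=0.25, prop_fakers=0.3, faking_sd=1.0):
    """Simulate criterion-related validity of a self-report score when some
    applicants inflate their responses. All values are illustrative."""
    # Latent conscientiousness and a performance criterion correlated at true_r
    z = rng.standard_normal((n, 2))
    conscientiousness = z[:, 0]
    performance = true_r * z[:, 0] + np.sqrt(1 - true_r**2) * z[:, 1]

    # A random subset of applicants inflates their self-report score upward
    observed = conscientiousness.copy()
    fakers = rng.random(n) < prop_fakers
    observed[fakers] += np.abs(rng.normal(0.0, faking_sd, fakers.sum()))

    # Criterion-related validity of the observed (partly faked) scores
    return np.corrcoef(observed, performance)[0, 1]

baseline = simulate_validity(prop_fakers=0.0)
with_faking = simulate_validity(prop_fakers=0.5, faking_sd=1.5)
print(f"no faking: r = {baseline:.2f}; heavy faking: r = {with_faking:.2f}")
```

Even this toy version shows the basic result: the more fakers there are and the harder they fake, the lower the observed validity falls.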

Next a fascinating study of motherhood bias in both expectations and screening decisions by Heilman and Okimoto. The researchers found a bias against both male and female parents when it came to anticipated job commitment, achievement striving, and dependability, although anticipated competence was uniquely low for mothers and seemed to be the major contributing factor to lowered expectations and screening recommendations. An unfortunate reminder that these factors do matter and something to watch out for. The results are reminiscent of negative behavior toward "pregnant" women found in a previous study.

Finally, Zyphur, Chaturvedi, and Arvey present a discussion of job performance. They address two subjects: the impact of past performance on future performance and individual differences in performance trajectories. Analyzing past literature, the authors note that performance feedback influences future performance directly and different individuals do have different latent performance trajectories, which has big implications for selection. Why? Because many assessment techniques (e.g., T&Es, behavioral interviews) rely on a general assumption that more experience equals better performance. This study adds ammunition to those that argue that assumption has serious flaws (or at least is overly simplistic).

In addition to these three, you may find the following interesting as well:

Challenging conventional wisdom about who quits: Revelations from corporate America. (great stuff for those of you interested in retention)

Effectiveness of error management training: A meta-analysis. (for all you trainers out there)

Effects of task performance, helping, voice, and organizational loyalty on performance appraisal ratings. (for those interested in performance ratings)

Wednesday, January 23, 2008

U.S. Army recruiting challenges continue

Despite aggressive (and creative) recruiting tactics, and a $4B budget, the U.S. Army failed to meet its recruiting goal in 2007, according to a report by the National Priorities Project.

Even worse than missing the numbers, the Army continues to see a change in who is being recruited. For the third consecutive year they failed to get sufficient numbers of recruits with high school diplomas, which the Army reports is the single best predictor of successful completion of a first term of enlistment.

In addition, historically, the Army had a goal of at least 67% of recruits scoring at or above the 50th percentile on the Armed Forces Qualification Test (AFQT, which tests word knowledge, paragraph comprehension, arithmetic reasoning, and mathematics knowledge; more details here). Since 2005 this number has fallen, and in 2007 it was 60.8%.

The Army puts educational attainment and AFQT scores together for a measure of "quality." According to the report:

A ‘high quality’ recruit is one who scores at or above the 50th percentile on the AFQT, and who is tier I (has a regular high school diploma or better). The DoD strives to have all recruits be ‘high quality’ as these recruits will be more likely to complete contracted enlistment terms and perform better in training and on the job.

Unfortunately the percentage of "high quality" recruits continues to drop. In 2005 it was 56.2%. Last year it was 44.6%.

The report includes state-by-state tables as well as a discussion of what the implications of this recruiting challenge are for the Army. Worth a look.

Monday, January 21, 2008

Tips for hiring more individuals with disabilities

The U.S. Equal Employment Opportunity Commission (EEOC) recently issued a report titled Improving the participation rate of people with targeted disabilities in the federal workforce.

Long title aside, the report offers all employers some concrete suggestions for increasing the hiring rate of individuals with disabilities, a chronically underemployed segment of the population. These recommendations include some that you probably have already thought of (establish a task force, train your managers, recruit broadly) but also include some you may not have:

- Ensure hired individuals can perform the essential functions of the position (so you don't set people up for failure)

- Use a panel interview to minimize bias of a single individual (which we know is a best practice for interviews generally)

- Use clear language and avoid jargon on job announcements (something we should be doing anyway)

- Encourage people with disabilities to self-identify and ensure confidentiality (your numbers may be artificially low)

There's a lot of good information and resources in the report that go beyond the federal workforce. Definitely worth checking out.

Thursday, January 17, 2008

Wanted: Chief Magic Official

You've heard of Chief Knowledge Officer, Chief Fun Officer, and even Chief Evangelist.

But Chief Magic Official?

One guess as to who would be filling that job title.

If you guessed Disney, go to the head of the class.

Yes, Disney is out to hire its first "CMO" who will appear periodically to grant "dreams" to guests at Disney Parks in Anaheim and Orlando this year. Disney's put together a great recruiting website that includes:

- A Magic Aptitude Test (M.A.T.) that you can take to see if you qualify--you have to take a look at it just to see the pencil you'll be using for the test (by the way, I passed and am apparently similar to Mickey Mouse)

- A great job preview video

- A creative job description and statement of qualifications

In a new twist, applications must include a video resume that will be voted on online. The top three vote getters will be invited (along with three guests) to Walt Disney World resort for further vetting.

So will this work? Probably. It is Disney, after all, who doesn't usually have too much trouble attracting candidates. And some research indicates applicants are more attracted to creative job titles.

But whatever happens, you gotta admire their creativity!

Tuesday, January 15, 2008

Marsalis jazzes up new EEOC PSAs

The U.S. Equal Employment Opportunity Commission, as part of its E-RACE Initiative, has produced two new public service announcements with renowned jazz musician Wynton Marsalis focusing on discrimination and equal opportunity. They're short and to the point, and you can see them here.

Friday, January 11, 2008

SIOP Conference Registration Open

Registration is now open for SIOP's Annual Conference, to be held April 10-12 in San Francisco.

Cost for early registration (before 2/29): $140 for members, $330 for nonmembers, $85 for students.

Workshops will be held on the 9th; registration is $400 for members and $650 for nonmembers, which includes two workshops.

Good things about the SIOP conference: tons of presentations and lots of technical information.

Bad things about the SIOP conference: tons of presentations and lots of technical information.

Thursday, January 10, 2008

December '07 IJSA, Part 2

In a previous post I described one of the most interesting articles in the December '07 issue of the International Journal of Selection and Assessment (IJSA). In this post I'll go over the rest of the issue.

First up, Carless & Imber with a study of interviewer characteristics. Using a sample of 450 graduate students, they found that personal aspects of the interviewer, such as warmth, friendliness, and humor, as well as job-related aspects such as job knowledge and general competence, had effects on how attracted the person was to the job as well as their likely job choice intentions. Not only that, aspects of the interviewer impacted how anxious the interviewee was. Lesson: Just as with recruiters, make sure the people doing the interviewing are trained and are the type of people you'd like representing your organization!

Next, LaHuis, MacLane, and Schlessman with a study of reapplication among 542 applicants to a U.S. government position. Focusing on the 9% that didn't get the job but reapplied the following year, the authors found that "opportunity to perform" played a significant role in reapplication behavior. Lesson: If you want people to reapply, give them the chance to show their strengths.

Third, Carless and Wintle with a study of what attracts applicants to particular job opportunities--specifically looking at flexible career paths and who's doing the recruiting. Participants were 201 "young, inexperienced job seekers" who completed a questionnaire. Results? Flexible career paths were a big attraction (compared to traditional career paths) but recruiter background (HR or external agency) made no difference, in line with previous research that's found that recruiter personality is the key rather than things like background or demographics.

Next, Hermelin, Lievens, and Robertson conducted a meta-analysis of assessment center scores. Based on 26 studies (N=5,850) the authors found a corrected correlation of .28 between AC scores and subsequent supervisory ratings, which they hypothesize is lower than the true value due to range restriction of assessment center scores (and lower than the corrected value of .37 that Schmidt & Hunter reported). Alternate version (if the link gets fixed) here.
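
For readers curious what "correcting for range restriction" involves: the standard Thorndike Case II adjustment scales the observed correlation by the ratio of unrestricted to restricted predictor standard deviations. A minimal sketch, with illustrative numbers rather than the actual figures from Hermelin et al.:

```python
import math

def correct_for_range_restriction(r_restricted, u):
    """Thorndike Case II correction for direct range restriction.
    r_restricted: validity observed in the selected (restricted) sample.
    u: ratio of unrestricted to restricted predictor SD (u >= 1)."""
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# An illustrative observed validity of .22 with u = 1.4 corrects up to about .30
print(round(correct_for_range_restriction(0.22, 1.4), 2))
```

With u = 1 (no restriction) the correction leaves the correlation unchanged; the more the selection process restricts predictor variance, the larger the upward adjustment.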

Heinsman et al. provide the next study, which looks at how psychologists view the makeup and measurement of competencies, still a hot topic in HR circles. Using data from over 900 applicants who participated in a 1-day selection process, they examined the relationship between competencies including Thinking, Feeling, and Power and traditional measures of cognitive ability, personality, and assessment center performance. Results? To assess Thinking, psychologists in this study relied upon measures of cognitive ability (makes sense!). To assess Feeling, they used interview simulation scores as well as personality test scores. Finally, when analyzing Power, they relied mostly on personality scores.

Last but not least, Lievens and Anseel with a fascinating study of creating alternate versions of computerized in-baskets. The alternate version used a very similar situation that people had to respond to, but altered the context for 10 of the 20 items. Results? No significant difference in overall in-basket score between the two forms. So if you're looking to duplicate your in-baskets, check this out! Oh, and there's an alternate version of the article here.

Okay, I might as well tell you about the other two articles, because you might be interested. One is on self and peer ratings and one is on survey non-respondents (hint: it's not the star performers that aren't responding). Both have interesting results, so check 'em out!

Monday, January 07, 2008

December '07 IJSA, Part 1

The December '07 issue of the International Journal of Selection and Assessment (IJSA) has a lot of great content, so I decided to split it up between two posts. This post will be devoted to one of the studies because it's such a hot topic--web-based recruitment.

In this study, Van Hoye and Lievens gathered data from 108 applicants for head nurse positions in Belgium and looked at how they responded to two types of web-based information. The first was employee testimonials ("click here to see our happy employees talk about their jobs!"), a very common approach used by many organizations including Enterprise, Google, and the U.S. government.

The second was employer recommendations via word of mouth; in this case, because it was over the web, the authors dubbed it word-of-mouse ("hey, did you hear about the opportunity at XYZ? They're a great place to work.").

Results? Well, there are several, and they all link back to source credibility.

First, word-of-mouse was a more powerful attractant than testimonials. Why? It appears the testimonials are seen as obviously controlled by the organization (and therefore possibly misleading), whereas word-of-mouse is seen as more credible. However, if you're going to use testimonials, those that focus on the individual were much more powerful than those focusing on the organization. The reverse was true for word-of-mouse: a focus on the organization was more powerful than on the employees (which underlines the importance of employer brand).

The big result is that considering all four combinations, word-of-mouse information about the organization was the clear winner in terms of attracting applicants. So efforts designed to increase the likelihood of people spreading the good word about your organization are likely to pay off. How do we do that? The authors offer some focus areas:

- image management
- campus recruitment
- building relationships with key opinion leaders (e.g., career counselors, class presidents)
- employee referral programs
- internships

I would add another: make sure your employees like what they do! Happy employees are hands down one of your most effective recruiting techniques (assuming you can't offer millions in stock options).

This is a keeper, folks, and there's a version here thanks to Dr. Lievens.

Wednesday, January 02, 2008

HR World's Top HR Blogs

It's always nice to ring in the new year with some good news, which is why I was pleased to see this blog on HR World's Top 25 HR Blogs.

Hope you and yours enjoyed a very happy holiday season!

Monday, December 31, 2007

Content of the year

At the end of this, the first full year of this blog's existence, I decided to take a look back at 2007 and give you my Top 5 most popular posts of the year:

1. Jobfox plays matchmaker (there continues to be significant interest in Jobfox and their non-traditional approach to matching applicants with employers)

2. Reliability and validity--it's okay to Despair(.com). Whether it's the statistics words or Despair, I'll never know. But people sure like those little posters (and remember, you can make your own).

3. Personality testing basics (Part 2). As you can see from the sidebar survey, folks continue to be very interested in personality testing.

4. Wonderlic revises their cognitive ability test. Wonderlic, one of the oldest and most famous testing companies, continues to generate interest.

5. Checkster and SkillSurvey automate reference checking. There's further development to be had, but I do believe these tools could be a boon to HR and supervisors alike.

Okay, so enough about me. What about what everyone else is writing about? Here are my nominations for content of the year:

1. Morgeson et al. fired a shot across the bow of personality testing with their piece in Personnel Psychology that resulted in multiple, shall we say, not so thrilled responses. I don't know where this debate is going (although I suspect alternate measurement methods will play a part) but it sure is fun to watch!

2. There were some great books I came across this year. Particular props for Understanding statistics, Evidence-based management, and Personality and the fate of organizations. Yes, they were all published in 2006...are you saying I'm behind?

3. Dineen et al.'s great little piece of research on P-O fit and website design in the March issue of J.A.P. that I wrote about here. Take a look at your career website with these results in mind.

4. The Talent Unconference was a big success, and I'm very thankful that many of the presentations were videotaped; I put up links to some of them here.

5. McDaniel et al.'s meta-analysis of situational judgment test instructions. Not only is this a great piece of research, it's (still) free!

So what about my New Year's wish from last year? I'm still waiting. Although if people-search databases like Spock eventually get up enough steam...perhaps I'll get my wish?

Here's to hoping 2008 is filled with interesting and useful things!

Monday, December 17, 2007

Call for Proposals for 2008 IPMAAC Conference

You're invited to submit a proposal for the 2008 IPMAAC Conference, to be held on June 8-11. The theme of next year's conference is Building Bridges: Recruitment, Selection, and Assessment and the conference will be in Oakland, California in the beautiful San Francisco Bay Area.

IPMAAC (IPMA-HR Assessment Council) is the leading organization for HR assessment and selection professionals interested in such diverse topics as job analysis, personality testing, recruiting, and web-based assessment.

IPMAAC members are psychologists, attorneys, consultants, management consultants, academic faculty and students, and other professionals who share a passion for finding and hiring the right person for the job. Whether you're from the public sector, private sector, or non-profit, everyone is welcome.

Please consider submitting a proposal for the 2008 conference--it could be a presentation, a tutorial, a panel discussion, a symposium, or a pre-conference workshop. If you have something you'd like to share, don't be shy.

Even if you don't submit a proposal, plan on attending the conference. It's a great opportunity to meet knowledgeable and passionate people and learn what everyone else is doing out there. Some presentations at last year's conference, held in New Orleans, included:

Succession planning and talent management

Using personality assessments in community service organizations

Legal update

Potholes on the road to Internet Applicant compliance

See you there!

Wednesday, December 12, 2007

HR.com's Virtual Conference

If you want to see something creative, head on over to HR.com's virtual conference called VIEW, which is taking place today and tomorrow.

HR.com says this conference, which is very Second Life-ish, will have 40+ speakers, 1000+ attendees and 70+ vendors.

Right now I'm watching Carly Fiorina talk about leadership. Later presentations include:

Managing a Talent Pool for Succession Planning

The Federal E-Verify Program and Electronic I-9 Compliance

Quality of Hire

Creating Value Exchange in the Candidate Experience

...and a lot more. Creative stuff! And oh yah, it's free.

Monday, December 10, 2007

November '07 Issue of J.A.P.

The November 2007 issue of the Journal of Applied Psychology is full of interesting articles, including several relating to recruiting and assessment. Let's take a look:

First, a field study by Hebl et al. on pregnancy discrimination. Female confederates posed as job applicants or customers at retail stores, sometimes wearing a pregnancy prosthesis. As "pregnant" customers, they received more "benevolent" behavior (e.g., touching and over-friendliness), but as job applicants they received more hostile behavior (e.g., rudeness). The latter effect was particularly noticeable when confederates applied for stereotypically male jobs. This isn't a form of discrimination that gets as much play as others, but may be much more common than we think. My guess is a lot of people associate pregnancy with impending time off and don't focus as much on the competencies these women bring to the job.

Second, a study on faking. But wait, not faking on personality tests, faking during interviews. Levashina and Campion developed an interview faking behavior scale and then tested it with actual interviews. Guess what? Scores on the scale correlated with getting a second interview. (Looks like those classes you took on answering vaguely are going to pay off!) But wait, there's more. The authors also found that behavioral questions were more resistant to faking than situational questions (another reason to use 'em!), and follow-up questions INCREASED faking (another reason NOT to use 'em!). Other goodies in this article: over 90% of undergraduate job candidates fake during employment interviews (I assume that's just this sample), BUT, the percentage that were actually lying, or close to it, was less (28-75%).

Third, Brockner et al. provide research results that underline how important procedural fairness (justice) is. Three empirical studies demonstrated that employees judge organizations as being more responsible for negative outcomes when they experienced low procedural fairness. So when applicants or employees get bad news, they'll blame the organization even more if they feel the process used was unfair. Why do we care? Because perceptions of procedural fairness impact all kinds of things, including recruiting (e.g., how someone reacts to not getting a job) and the likelihood of filing a lawsuit (for, say, discrimination).

Fourth, Lievens, Reeve and Heggestad with a look at the impact of people re-taking cognitive ability tests. Using a sample of 941 candidates for medical school who took an admissions exam with a cognitive component, the authors found that retesting introduced both measurement and predictive bias: the retest scores appeared to be measuring memory rather than g, and predictive validity (of GPA) was eliminated. More evidence that re-testing effects are non-trivial. Pre-publication version here.

Last but definitely not least, one of my favorite topics--web-based recruitment. Allen, Mahto, & Otondo present results from 814 students searching real websites. When controlling for a student's image of the employer, job and organizational information correlated with their intention to pursue employment. When controlling for information search, a student's image of the employer was related to the intention to pursue employment, but familiarity with the employer was not. Finally, attitudes about recruitment source influenced attraction and partially mediated the effects of organizational information. What does all this mean? Don't put all your eggs in one basket--organizational image is important, but so is the specific information you have on your website about your organization and the specific job.

There's a lot of other good stuff in this volume, including articles on the financial impact of specific HRM practices, a meta-analysis of telecommuting impacts, engaging older workers, and daily mood.

Wednesday, December 05, 2007

EEOC Issues Fact Sheet on Employment Testing

On Monday, the U.S. Equal Employment Opportunity Commission (EEOC) issued a fact sheet on employment testing.

The fact sheet covers, at a high level, various topics including:

- types of tests

- related EEO laws (Title VII, ADA, ADEA) and burden-shifting scenarios

- the Uniform Guidelines on Employee Selection Procedures

- recent related EEOC litigation and settlements

- employer best practices

For those of you familiar with the legal context of employment testing, this isn't new information. But it is a nice quick summary of some of the major points, and could be very useful for someone not as familiar with this area.

For a more thorough treatment, I recommend the U.S. Department of Labor's guide for employers on testing and selection.

Monday, December 03, 2007

Winter '07 Personnel Psychology

Things are starting to heat up in the journal Personnel Psychology. The shot across the bow of personality testing that happened in the last issue of Personnel Psychology turns into a full-blown brawl in this issue. But first, let's not forget another article worth our attention...

First up, Berry, Sackett, and Landers revisit the issue of the correlation between interview and cognitive ability scores. Previous meta-analyses have found this value to be somewhere between .30 and .40. Using an updated data set, excluding samples in which interviewers likely had access to ability scores, and more accurately calculating range restriction, the authors calculate a corrected r of .29 based on the entire applicant pool. This correlation is even smaller when interview structure is high, when the interview is behavioral description rather than situational or composite, and when job complexity is high. Why is this important? Because it impacts what other tests you might want to use--the authors point out that using their updated numbers they obtained a multiple correlation of .66 for a high-structure interview combined with a cognitive ability test (using Schmidt & Hunter's methods and numbers). Pretty darn impressive.
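
A multiple correlation like that comes from combining two predictors while accounting for their overlap. Here's a minimal sketch of the calculation; the validity and intercorrelation values are illustrative stand-ins, not the exact figures Berry et al. used.

```python
import math

def multiple_r(r1, r2, r12):
    """Multiple correlation of a criterion with two predictors, given each
    predictor's validity (r1, r2) and the predictors' intercorrelation (r12)."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_squared)

# Illustrative inputs: two predictors each correlating .51 with performance,
# intercorrelated at .29 (the lower interview-GMA value reported above)
print(f"combined validity: {multiple_r(0.51, 0.51, 0.29):.2f}")
```

The design point: the lower the intercorrelation between two valid predictors, the more each adds over the other--which is exactly why a smaller interview-cognitive ability correlation implies a higher combined validity.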

Now that we have that under our belt, ready for the main event? As I said, in the last issue Morgeson et al. came out quite strongly against the use of self-report personality tests in selection contexts--primarily because they claim the uncorrected criterion-related validity coefficients are so small. So it's not surprising that this edition contains two articles by personality-research heavyweights defending their turf...

First, Tett & Christiansen raise several points; more than I have space for here. Some points include: considering conditions under which personality tests are used and validity coefficients aggregated; that there are occupational differences to consider; that coefficients found so far aren't as high as they could be if we used more sophisticated approaches like personality-oriented job analysis; and that coefficients increase when multiple trait measures are used. This sums their points up nicely: "Overall mean validity ignores situational specificity and can seriously underestimate validity possible under theoretically and professional prescribed conditions."

Second, Ones, Dilchert, Viswesvaran, and Judge come out swinging and make several arguments, including: conclusions should be based on corrected coefficients; coefficients are on par with other frequently used predictors, some of which are much more costly to develop (e.g., assessment centers, biodata); different combinations of Big 5 factors are optimal depending upon the occupation; and compound personality variables should be considered (e.g., integrity). Suggestions include developing more other-ratings instruments and investigating non-linear effects (hallelujah), dark side traits, and interactions. They sum up: "Any selection decision that does not take the key personality characteristics of job applicants into account would be deficient."

Not to be out-pulpited (yes, you can use that phrase), Morgeson et al. come back with a response to the above two articles, reiterating how correct they were the first time around. They state that much of what the authors of the above articles wrote was "tangential, if not irrelevant", that with respect to the ideas for increasing coefficients, "the cumulative data on these 'improvements' is not great", and that corrected Rs presented by Ones et al. aren't impressive when compared to other predictors. They point out some flaws of personality tests (applicants can find them confusing and offensive) but fail to mention that ability tests aren't everyone's favorite test either. They claim that job performance is the primary criterion we should be interested in (which IMHO is a bit short-sighted), and that corrections of coefficients are controversial.

So where are we? Honestly I think these fine folks are talking past each other in some respects. Some issues (e.g., adverse impact) don't even come up, while other issues (e.g., faking) are given way too much attention. It's difficult to compare the arguments side by side because each article is organized differently. It doesn't help that the people on both sides are some of the researchers with the most invested (and most to lose) by arguing their particular side.

I'm thinking what's needed here is an outside perspective. Here's my two cents: this isn't an easy issue. Criteria are never "objective." Job performance is not a singular construct. Job complexity has a huge impact on the appropriate selection device(s). And organizations are, frankly, not using cognitive or ability tests nearly as much as they are conducting interviews. So let's stop focusing on which type of test is "better" than the others. Frankly, that's cognitive laziness.

So is this just sound and fury, signifying nothing? No, because people are interested in personality testing. Hiring supervisors are convinced that it takes more than raw ability to do a job. We shouldn't ignore the issue. Instead we should be focusing on providing sound advice for practitioners and treating other researchers with respect and attention.

Should you use personality tests? I'll answer that question with more questions: What does the job analysis say? What does your applicant pool look like? What are your resources? Personality testing isn't something to apply cookie-cutter, but neither is it something to write off completely.

Okay, I'm off my soapbox. Last but not least, there are some good book reviews in this issue. One is of Bob Hogan's Personality and the Fate of Organizations (which I enjoyed immensely and actually finished, which is rare for me), which the reviewer recommends; another is Alternative Validation Strategies, which the reviewer highly recommends; and the third is Recruiting, Interviewing, Selecting, & Orienting New Employees by Diane Arthur, which the reviewer...well...sort of recommends--for generalist HR practitioners.

That's all folks!

Tuesday, November 27, 2007

Old-school competencies

Recently I went with family to the old Empire Mine in the gold country of California. It's one of the oldest, deepest, and richest mines in California. It produced 5.6 million ounces of gold before it closed in 1956.

In the display cases I happened to notice a brief description of the typical tasks and competencies required of a miner (a mini-job analysis if you will):

How would you have selected people for this job? Physical ability test? How about some type of personality test that measured flexibility and stress resistance? Might a realistic job preview have been a good idea?

Note it includes the traditional "what would happen if this job was done incorrectly..." question. This takes on increased meaning when you look at the shaft that the miners traveled down every day on their "way to work":

You may not be able to tell, but that sucker's pretty much straight down.

Suddenly my cubicle's looking a lot better.

Wednesday, November 21, 2007

Applied Workforce Planning: Air Traffic Controllers

There's plenty of disagreement over whether the aging of the U.S. workforce will indeed spark a wave of simultaneous retirements and thus a scramble to replace employees. Personally, I think it's going to depend on a lot of factors--the housing market, health care costs, and technological advancements to name a few.

But there are instances that demonstrate what it could look like if a significant portion of the U.S. workforce retires. One of those instances is the air traffic controllers who work for the Federal Aviation Administration (FAA) and were the subject of a recent piece on NPR. Let's look at the situation and the implications:

- In the summer of 1981, President Ronald Reagan replaced 12,000 striking air traffic controllers all at once. When you make that many hires at the same time, in the same classification, with many of them probably around the same age, in the public sector (where folks often stick around for a long time), you'd better be thinking about workforce planning.

- Thirty percent more controllers retired last year than the FAA predicted. That's significant, and whether it's due to insufficient planning or not, it illustrates the scope of the problem we're facing.

- Last year the FAA imposed a new labor contract on the controllers which lowered pay for new hires, froze pay for those with longevity, and placed new restrictions on the working environment. Not a great way to attract new candidates into a field where you need a high number of replacements in a short period of time.

- According to the union, employees feel stretched and burned out, which may lead to a serious accident. Not the type of publicity that demonstrates an outstanding employer brand.


Lessons? Like politics, all workforce planning is local. And yes, it's an inexact science. But sometimes the numbers just hit you over the head and demand attention. Do you have a situation like this in your organization? If so, what are you doing about it?

Thursday, November 15, 2007

You don't know you

In a recent interview with Gallup Management Journal, Cornell psychologist David Dunning talked about why people aren't very good at judging themselves. Why is this important? Because it has a great deal to do with how we recruit and assess applicants.

A big part of the reason we're so bad at accurately judging ourselves is due to our self-serving bias--our tendency to take credit for successes but blame outside factors (other people, equipment) for our failures. This helps our ego out--if we were always blaming ourselves for failures and attributing success to outside factors, we wouldn't be very happy campers. But it has the downside of oftentimes blinding us to the real reason why things happen.

Dr. Dunning covers a wide range of topics in the interview, including gender differences, when overconfidence may be a good thing, employee training, providing feedback, and the serious implications of this phenomenon (e.g., think about doctors judging their skills as being better than they are).

As I said, this is important because a great deal of recruitment and selection is about self-assessment--a prime example is the growing movement toward online training and experience (T&E) questionnaires made easier with the spread of ATS products. Many of these questionnaires are chock full of questions that (no joke) aren't much different than: "How great are you at X?"

But it's not just about T&Es. People make judgments about themselves when deciding what jobs to apply for in the first place. They describe themselves in certain ways during job interviews (when the motivation to make yourself look good is even stronger).

What can we do about it? Simply put, verify, verify, verify. If someone claims to be the greatest Java programmer on the planet, make them show you. If they claim to be a great orator, make an oral presentation part of the hiring process. Then talk to folks that know their work to establish a history of competence. Don't take someone's word at face value because (a) they may be trying to snow you, but more subtly (b) they may not know themselves.

By the way, Dr. Dunning is co-author of one of my all-time favorite articles, "Flawed self-assessment: Implications for health, education, and the workplace" which describes how the inability of people to judge themselves accurately can result in very serious problems.

Last thing: if you're not already familiar with it, check out the fundamental attribution error, which is one of the other big things our brain is constantly doing. It has huge implications for how we judge others.

Tuesday, November 13, 2007

Rigorous assessment pays off

It's great when the mainstream press gets assessment right. It doesn't happen a lot, so I want to make sure to point out a good example.

Ellen Simon (AP) devoted a recent article to employers that, even in a tight labor market, put job applicants through their paces.

Some of my favorite bits from the article:

- Employers that recognize their employees are an integral part of their brand. If your employees are unhappy, not trained, or otherwise a bad fit, customers (and potential applicants) notice.

- This quote from Rackspace Managed Hosting CEO Lanham Napier: "We'd rather miss a good one than hire a bad one." Without getting into Type I versus Type II errors, let me just say that Mr. Napier demonstrates the wisdom of someone who's seen what a bad hire can (or can't) do. (Check out their refreshingly simple career portal)

- The fact that Rackspace interviews last ALL DAY. Yep, all day. In this age of "I only have 30 minutes for the interview", that's darn refreshing.

- The wonderful use of realistic job preview videos by Lindblad Expeditions that show employees cleaning toilets and washing dishes. Says Kris Thompson, VP of HR, "If they get on board and say, 'This is not what I expected,' then shame on us." Check out how their online preview video combines push with pull.

I don't agree with everything in the article--I'm not a big fan of the idea of secretly judging people on their waiting room behavior--but all in all some great examples here to recognize.

(By the way, the HBR article Simon cites, called "Fool vs. Jerk: Whom Would You Hire?", is here.)

Wednesday, November 07, 2007

The changing requirement for war

The competencies required in the modern workplace change seemingly on a daily basis. Whereas computer skills were once a "nice to have" competency, now at least moderate computer proficiency is a "must have" for many (if not most) jobs. Whereas the ability to collaborate frequently with others may have been an infrequent requirement, now it is a prerequisite for many occupations.

Many times these changes are required to take advantage of new technologies and to keep up with competitors, and for the most part the transition works. On the other hand, sometimes we plan for one change in competencies when we should have gone another direction.

An excellent demonstration of this is the military. In this article from The Economist titled Armies of the future, the author describes the vision of former U.S. Defense Secretary Donald Rumsfeld:

The army's idea of its “future warrior” was a kind of cyborg, helmet stuffed with electronic wizardry and a computer display on his visor, all wirelessly linked to sensors, weapons and comrades. New clothing would have in-built heating and cooling. Information on the soldier's physical condition would be beamed to medics, and an artificial “exoskeleton” (a sort of personal brace) would strengthen his limbs.

At first, with initial successes in Afghanistan and Iraq, it seemed this vision would be validated. But as the nature of the warfare changed, so did soldier requirements. The current person in charge of the war in Iraq, General David Petraeus, has co-authored a new manual on counter-insurgency. According to the article:

Counter-insurgency, [the manual] says, is “armed social work”. It requires more brain than brawn, more patience than aggression. The model soldier should be less science-fiction Terminator and more intellectual for “the graduate level of war”, preferably a linguist, with a sense of history and anthropology.

This has huge implications for how the military will recruit, assess, and train soldiers, and we can argue about whether this shift could have been predicted or not, but there are some clear lessons here:

1 - Job analysis is important, and don't just do it once and put it in a file. Not that we needed any more evidence of this, but the example above vividly demonstrates how critical it is to carefully study a job (and keep studying it) to inform recruitment and assessment. Of course simply doing a job analysis doesn't guarantee success, as I'm sure there was a not-insignificant amount of thought that went into the "future warrior" idea.

2 - Look before you leap. Plan for how you will select people and train existing staff. The article doesn't go into this in depth, but one big implication for such a significant shift in competency requirements is what to do with existing staff. We've all been there--we implement a new software program, we require everyone to start writing or presenting more--and sometimes it works, sometimes it doesn't. Is our success due to the skill level of existing staff? Our careful planning? Or dumb luck?

When it comes to ensuring people are successful at what they do, fortunately we don't have to rely on luck. But we do have to devote time, resources, and careful attention to doing recruitment and assessment right. The consequences are important--whether they're saving lives or helping an organization be successful.

Monday, November 05, 2007

Jobfox members look upward

Jobfox is a job site I've posted about before that is making an attempt to more accurately match candidates with employers. The idea is to allow candidates to describe themselves in detail, including their work preferences, then have employers seek them (hence their motto, "be the hunted.")

Speaking of work preferences, one of the features Jobfox offers is the ability for candidates to select up to five features of a job that they value the most--things like 401k matching, an unstructured environment, and work/life balance.

Since Jobfox has all this information on people, they recently posted an analysis of results of over 6,000 registered job seekers. The press release focuses on the dearth of "green" factors people are looking for (e.g., looking for a company that is ecologically friendly), but to me the take home is about career advancement. Take a look at the top six desired job qualities:

Advancement opportunity (55%)
More leadership responsibility (41%)
Work/life balance (38%)
Leadership that's respected/admired (36%)
Sense of accomplishment (36%)
Higher salary (28%)

Notice that half of these, including the two most popular, are related to moving up in an organization.

The other result of note has to do with another kind of green (in the U.S. at least). Look at where salary is--down at #6. This suggests (and smart organizations know) there are ways of attracting and retaining talented folks simply by offering ways for people to take on increased responsibility and leadership opportunities, or restructuring the job (which might also help with that sense of accomplishment).

Monday, October 29, 2007

October '07 ACN

The October 2007 issue of the Assessment Council News is out with two great articles:

First, Dr. Mike Aamodt tackles the issue of validity coefficients with Beauty May Be in the Eye of the Beholder, but is the Same True of a Validity Coefficient? (begins page 2)

In the article, Dr. Aamodt gathers data from experts in the assessment community on questions such as:

- Is there a minimum value for a validity coefficient that would generally be accepted by testing experts? If so, what is it?

(includes a great table summarizing where certain validity coefficient values have been referenced)

- What is the lowest uncorrected validity coefficient that you believe would indicate that an inference from a test has acceptable criterion validity?

- If a validity coefficient is statistically significant, is that enough to imply job relatedness?
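On that last question, statistical significance alone says little, because significance scales with sample size. A quick sketch of the standard t-test for a correlation makes the point (numbers are illustrative):

```python
import math

def correlation_t_stat(r, n):
    """t statistic for testing whether a correlation differs
    from zero: t = r * sqrt(n - 2) / sqrt(1 - r^2). With large
    samples, even trivial coefficients pass the test."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# A validity coefficient of .08 is trivial in practical terms,
# but with n = 1000 it is statistically significant (t > 1.96):
t_large_n = correlation_t_stat(0.08, 1000)  # ~2.54
# The same coefficient with n = 100 is not:
t_small_n = correlation_t_stat(0.08, 100)   # ~0.79
```

Which is presumably why the experts were asked about minimum *values*, not just significance.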

The second article is by Natasha Riley and covers a topic near and dear to our hearts--Unproctored Internet Testing--The Technological Edge: Panacea or Pandora's Box? (page 11)

If the title seems a little odd and/or familiar, it's because it's a combination of various presentation titles from the 2007 IPMAAC conference where unproctored Internet testing was a hot topic. In the article, Natasha covers some pros and cons of this type of testing and describes how Riverside County in California is having some success with it.

Remember, internet-based testing does not have to be about selecting out. It can be about giving candidates tools they can use to determine whether they would be a good fit. Cheating is removed as an obstacle when we eliminate the motivation. Consider giving the applicant the "exam" and have them determine whether they want to move forward given their results and how they compare to successful job incumbents.

Anyway, kudos to IPMAAC for another illuminating issue of ACN. Keep 'em comin', and folks, watch out for the call for proposals for the 2008 IPMAAC conference! (to be announced shortly)

Wednesday, October 24, 2007

October '07 TIP: Alternatives and Title VII

You legal buffs out there know that under Title VII of the Civil Rights Act of 1964 (as amended in 1991) there exists a "burden shifting" framework that lays out how an employment discrimination case (hypothetically) proceeds:

1 - The plaintiff must show that the employer is using a particular employment practice (e.g., a selection test) that results in disparate (or adverse) impact against a legally protected group; if successful,

2 - The employer must show that the practice was/is job related for the position in question and consistent with business necessity; if successful,

3 - The plaintiff must show that there is an alternative employment practice (e.g., a different type of test) that would serve the employer's needs, be equally valid, and have less adverse impact and the employer refuses to adopt it. The classic case is plaintiffs suing over a written knowledge test and suggesting a personality or performance test should have been used.
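The first prong--showing disparate impact--is in practice often assessed with the Uniform Guidelines' "four-fifths rule" of thumb. A minimal sketch, with illustrative numbers:

```python
def adverse_impact_ratio(selected_protected, applied_protected,
                         selected_majority, applied_majority):
    """Ratio of the protected group's selection rate to the
    majority group's. Under the four-fifths rule of thumb, a
    ratio below 0.80 is generally taken as evidence of
    adverse impact."""
    rate_protected = selected_protected / applied_protected
    rate_majority = selected_majority / applied_majority
    return rate_protected / rate_majority

# 30 of 100 protected-group applicants hired vs. 50 of 100
# majority-group applicants: .30 / .50 = .60, below .80,
# so the plaintiff would likely clear step 1.
ratio = adverse_impact_ratio(30, 100, 50, 100)
```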

You may also know that plaintiffs rarely win employment lawsuits (for many reasons, one of which is that employers are getting better at #2 above), and there seems to be a shift toward the third prong of the case--showing that there are alternative testing mechanisms out there that are equally valid and have less adverse impact.

The October issue of the Industrial-Organizational Psychologist (TIP) contains two articles (both by individuals who have served as expert witnesses in discrimination cases) that touch on this subject and are worth a read:

Slippery slope of "alternatives" altering the topography of employment testing? by James Sharf

and

Less adverse alternatives: Making progress and avoiding red herrings by James Outtz

Also in this issue, a great analysis of the recent U.S. Supreme Court ruling in Parents v. Seattle School District by Art Gutman and Eric Dunleavy that reviews in detail the current status of affirmative action.

Monday, October 22, 2007

Looking far and wide

When it comes to finding talented individuals, how far and wide do you look? Are you as creative as you could be?

In a recent article James E. Challenger, of the outplacement firm Challenger, Gray & Christmas, described the results of a new study in which half of the 100 HR executives polled stated their companies work informally with former employees; 23% considered stay-at-home parents to be valuable recruiting targets. The goal? Finding folks who are experienced and easy to train.

What does your organization do when people leave? Does it go beyond getting a forwarding address? How about following a structured approach to keep track of talented individuals in case their next job fizzles?


Challenger cites Lehman Brothers as a leader in this area with its Encore program, which according to the website is "designed to facilitate networking and professional development opportunities for individuals interested in resuming their careers in the financial services industry. Ideal candidates are women and men, preferably with industry-related experience at a vice-president level or above, who have voluntarily left the workforce for a year or more."

Does your organization actively recruit people that have been out of the workforce for a year or more? Or are these people seen as "stale"?

The article also includes "resources for returning parents", including:

UCEAdirectory.org (searchable database of continuing education courses)

Meetup.com (real-world social networking)

Modernmom.com (advice on activities and work-life balance)

Showmomthemoney.com (money tips, degree links, and more)

Ladies Who Launch (networking and entrepreneurial advice)

Has your organization considered recruiting efforts that target these types of groups? Or is it hoping that qualified applicants find you?

Some things to think about as we all work on being more creative about reaching out to all qualified candidates. I bet there are a lot of folks out there who would love to see a list of employers willing to hire returning workers (as well as those open to part-time arrangements).

Wednesday, October 17, 2007

Increasing response rates to web surveys

I'm guessing I'm not the only one that has been using web surveys more and more (I'm a big fan of SurveyMonkey), and we'll probably all do more in the future. That's why this recent bit of research by Dr. Thomas Archer is so valuable. It's titled "Characteristics associated with increasing the response rates of web-based surveys," and it's based on results from 99 web-based surveys conducted with Zoomerang over more than two and a half years.

The results were somewhat surprising (to me at least). The length of the questionnaire wasn't particularly important in terms of response rate. This included both the number of open-ended questions and the length of rating scales. Instead the challenge is getting people to the survey in the first place.

How do we do that? The author recommends several strategies:

1 - Leave the questionnaire open for a while (say, three weeks), and send out a couple reminders along the way.

2 - Pay attention to how you write your survey invitations. They should be written at a low grade level in terms of readability.

3 - Make it clear to survey participants "what's in it for them" (e.g., you'll get a copy of the results).

If you haven't played around with web-based surveys, I'd encourage you to. They're very easy to learn and typically inexpensive.

I'd be interested to hear whether anyone out there is using web-based surveys as part of their recruitment/assessment process. Seems like a natural fit.

Saturday, October 13, 2007

Depressed workers

A new report by the U.S. Department of Health and Human Services' Substance Abuse and Mental Health Services Administration (SAMHSA; say that five times fast) states that an average of 7% of U.S. workers age 18 to 64 experienced a major depressive episode (MDE) in the past year. The data is based on surveys conducted of full-time workers in 2004 and 2006.

Importantly, the rate of MDEs varied by occupation, age, and gender:

- The highest rates of depression were found among personal care and service occupations (10.8%) and food preparation and serving related occupations (10.3%).

- The highest rate among female workers was in personal care and service (a whopping 14.8%). The highest rate among men was in arts, design, entertainment, sports, and media occupations (a much lower 6.7%).

- Lowest overall rates were found in engineering, architecture, and surveying (4.3%); life, physical, and social sciences (4.4%); and installation, maintenance, and repair (4.4%).

- The highest rate among those age 18-25 was in healthcare practitioners and technical (11.9%); lowest was in life, physical, and social sciences (4.3%).

- The highest rates among those age 50-64 were in financial (9.8%) and personal care and service (9.7%) while the lowest rates were in sales and related (3.6%) and production (3.7%).

(Keep in mind when looking at occupational differences that women typically report higher levels of depression, and depression rates decrease with age)

Implications? Keep in mind these statistics if you have people in these occupations in your organization. Depression will impact productivity and your recruitment efforts, let alone any other processes or initiatives that require high levels of employee energy and involvement. Wellness efforts specifically targeting individuals in these occupations may pay bigger dividends.

The report includes the fact that U.S. companies lose an estimated $30-40 billion a year due to employee depression.

Thursday, October 11, 2007

September '07 issue of J.A.P.

The September 2007 Journal of Applied Psychology is out and it's got some research we need to look at...

First, Hogan, Barrett, and Hogan present the results of a study of over 5,000 job applicants who took a 5-factor personality test on two separate occasions (6 months apart). Only 5% of applicants improved their score on the second administration, and scores were just as likely to change in a negative direction as a positive one. The authors suggest that given these results, faking on personality measures is not a significant issue in real-world settings. Comment: This is faking defined as improving your score to match the job, not faking as misrepresenting yourself consistently over time. Also, I can't help but think of the recent article by Morgeson et al. in Personnel Psych where they argued that faking isn't the issue; it's the low validities we should be concerned about.

Next, De Corte, Lievens, and Sackett present a procedure designed to determine predictor composites (i.e., how much each testing method should be "worth") that optimize the trade-off between validity and adverse impact. The procedure is tested with various scenarios with positive results. You can actually download the executable code here and an in-press version of the article is here (thank you, Drs. De Corte and Lievens!).
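To see why weights matter at all, here's the textbook formula for the validity of a weighted predictor composite--a minimal sketch, not the authors' actual procedure, and all numbers are illustrative:

```python
import math

def composite_validity(weights, validities, intercorrelations):
    """Validity of a weighted sum of predictors.
    weights: weight assigned to each predictor
    validities: each predictor's correlation with the criterion
    intercorrelations: predictor intercorrelation matrix
    (1s on the diagonal)."""
    numerator = sum(w * v for w, v in zip(weights, validities))
    variance = sum(weights[i] * weights[j] * intercorrelations[i][j]
                   for i in range(len(weights))
                   for j in range(len(weights)))
    return numerator / math.sqrt(variance)

# Two predictors--say a cognitive measure (r = .50) and a
# conscientiousness measure (r = .25), intercorrelated at .10--
# weighted equally yield a composite validity slightly higher
# than the cognitive measure alone:
R = composite_validity([0.5, 0.5], [0.50, 0.25],
                       [[1.0, 0.1], [0.1, 1.0]])
```

The De Corte et al. procedure goes further by searching for weights that balance this validity against the composite's adverse impact.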

Speaking of validity, the next article of interest is by Newman, Jacobs, and Bartram, and looks at the relative accuracy of three techniques for estimating validity and adverse impact (local validity studies, meta-analysis, and Bayesian analysis). The authors describe which method is optimal in different conditions, using measures of cognitive ability and conscientiousness as predictors. They even toss in recommendations for how to estimate local parameters.

Next, a fascinating and useful study of counterproductive work behaviors (CWBs; things like theft, sabotage, or assault) by Roberts, Harms, Caspi, and Moffitt that tackles the issue from a developmental perspective. Using data from a 23-year longitudinal study of 930 individuals, the authors found that individuals diagnosed with childhood conduct disorder were more likely to commit CWBs as young adults. On the other hand, criminal convictions occurring at a young age were unrelated to CWBs demonstrated later on. Job conditions and personality traits had their own impact on CWBs, above and beyond background factors. Great stuff, especially for those of you with a particular interest in biodata and/or background checks.

Last but not least, a study of person-organization (P-O) fit by Resick, Baltes, and Shantz. Using data from 299 participants in a 12-week internship program, the authors found that the relationship between P-O fit on the one hand and job satisfaction, job choice, and job offer acceptance on the other depends on the type of fit (needs-supplies vs. demands-abilities) as well as the conscientiousness of the individual. Good food for thought when thinking about P-O fit, a consistently popular concept.

Honorable mention: This meta-analysis by Humphrey, Nahrgang, and Morgeson of 259 studies that investigated work design impacts on worker attitudes and behaviors. Think behavior is determined solely by individual ability and disposition? Ya might want to take a gander at this study; it'll change your tune. A great reminder that satisfaction and performance are the result of both the individual and his/her work environment. Also available here.

Monday, October 08, 2007

Checkster and SkillSurvey automate reference checking

When it comes to things that supervisors (and, frankly, HR) don't look forward to, reference checking probably ranks in the top 5. Checking references is time consuming and difficult to do well, because many references refuse to do more than confirm name, job title, salary, and employment dates for fear of getting sued. This is unfortunate, since lawsuits in this area are quite rare and reference checks can be a great source of information.

So it was with no small amount of excitement that I discovered two web-based services that are automating the reference check process--Checkster and SkillSurvey. The basic idea behind these services is that a brief survey consisting of both rating scales and open-ended questions is sent out electronically to references; responses to these surveys (generally at least 3) are combined by the services into an overall report for the employer. While the services are substantially similar, in this post I'll give you a brief overview of each.
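The combine-and-report step both services describe can be sketched roughly like this (field names, scales, and the response threshold are my own illustrative assumptions, not either vendor's actual format):

```python
from statistics import mean

# Hypothetical reference responses: rating-scale items (1-5)
# plus a free-text comment, mimicking the survey structure.
responses = [
    {"teamwork": 4, "reliability": 5, "comment": "Great colleague."},
    {"teamwork": 5, "reliability": 4, "comment": "Very dependable."},
    {"teamwork": 3, "reliability": 4, "comment": "Solid performer."},
]

MIN_RESPONSES = 3  # both services wait for several references

def build_report(responses):
    """Average the rating-scale items and collect the comments
    once enough references have responded."""
    if len(responses) < MIN_RESPONSES:
        return None  # keep waiting for more references
    items = [k for k in responses[0] if k != "comment"]
    return {
        "averages": {k: mean(r[k] for r in responses) for k in items},
        "comments": [r["comment"] for r in responses],
    }

report = build_report(responses)
```

Aggregating across several references like this is part of what makes individual reference givers comfortable: no single response is identifiable in the report.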

Checkster

Checkster is the brainchild of CEO Yves Lermusi, formerly of Taleo (and frequent contributor to ERE). Lermusi noticed that the frequency and quality of the performance feedback most people receive drops dramatically when they move from school to work, making it difficult for people to understand their strengths and areas for development. To help remedy this, he developed Checkster to be a "personal feedback management tool"--a focus that he says distinguishes it from other services, whose bread and butter is employer-based reference checking. Applicants receive the results of the reference check, just as the employer does, with the idea that this information will be used to help people develop and make better decisions regarding their career.

With Checkster, the employer simply enters the name and e-mail address of each applicant along with the requisition and selects the type of survey to be delivered (Checkster also has a 360-degree survey). That's it. Simple, eh? The applicant takes it from there, logging into Checkster and entering reference names and contact information. References have 7 days to take the quick and confidential survey, and Checkster compiles the resulting information into a report after at least three responses have been collected. From the employer's side, a simple account screen allows you to manage your requisitions and see the status of each. You can see an overview of how it works here, and watch a demo here that includes pictures of a report.

Big bonus: Checkster also has a free employment verification feature which will send an e-mail to previous employers to verify dates of employment, reporting structure, compensation, and eligibility for rehire.

Price: $50 per requisition, which allows you to check references for up to five candidates with a maximum of 15 references per candidate (volume discounts are also available).

SkillSurvey

As I mentioned, SkillSurvey and Checkster work in a similar fashion--the employer enters candidate information, the candidate enters reference information (or the employer can), references evaluate the candidate, and SkillSurvey generates the report.

Differences between Checkster and SkillSurvey that I observed:

1) SkillSurvey allows you to choose different types of surveys depending on the job. Each includes competencies developed by SkillSurvey staff. For example, there are different surveys for sales positions, IT positions, and HR positions (click here for an example for Marketing Manager).

2) Each point on SkillSurvey's rating scale is anchored, which could potentially lead to better reliability.

3) SkillSurvey reports are not automatically available to the candidate (unlike Checkster)--this reflects the emphasis that SkillSurvey places on being primarily a tool for the employer, versus Checkster's focus on individual development.

4) SkillSurvey has a sourcing component built in--you can download a spreadsheet that contains all the information on reference-givers that you can sort and use to identify applicants (very cool).

5) Checkster's reports are a little shorter and more graphical, while SkillSurvey reports are more text-based and extensive.

6) In terms of customization, SkillSurvey offers many options for altering things like turnaround time, and even weighting questions.

7) The actual text that goes out from candidates is easier to modify using Checkster.

A SkillSurvey overview video with screens of surveys and reports is available here, and sample reports are here. They even have a blog written by Doug LaPasta, their founder and chairman.

Price: $59 for one candidate with significant discounts for volume; usually charged in units of 100 candidates. The employer controls the number of reference givers required for completion of the report (anywhere from 2-15).

By the way, SkillSurvey was selected as a Top HR Product for 2007 by Human Resource Executive.

Summary

Both products have the potential to dramatically decrease the amount of time spent checking references, and have the added benefits of standardization as well as indicating an affinity for technology. Both companies have taken steps to ensure reference givers feel comfortable giving out information. The information may also be of higher quality since the process is being handled by a third party.

Both services were extremely easy to use. I found representatives from both companies to be knowledgeable and helpful. I'm sure as both products mature we'll see great additions, including hopefully an increased ability to gather off-list checks and even more options for tailoring the surveys.

Some things to keep in mind: (1) Like all mass-mailing type services, make sure e-mails from these companies don't get blocked by firewalls; and (2) Because some candidates may provide false references, do periodic spot checks (e.g., by verifying name & e-mail address).

I hope this has piqued your interest; I suggest checking out both products to see if either would make your life easier!

Thursday, October 04, 2007

OPM Introduces Assessment Decision Tool

The U.S. Office of Personnel Management (OPM) continues to innovate in the area of web-based assessment offerings with the introduction of the Assessment Decision Tool.

This interactive application allows you to enter the competencies required for a particular position in any of seven occupational groups:

- Clerical/technical
- Human resources
- IT
- Leadership occupations
- Professional/administrative
- Science/engineering
- Trades/labor

After entering this information, the system will create a matrix that identifies which assessment tools would be the best fit for each competency, then gives you totals so you can see which methods might be best overall. These matches were identified by a panel of OPM psychologists who specialize in personnel assessment.
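To make the matrix-and-totals idea concrete, here is a minimal sketch of that matching logic. The competencies, methods, and match ratings below are illustrative placeholders I made up for the example — in the actual tool, the ratings come from OPM's panel of personnel psychologists.

```python
# Hypothetical competency-to-assessment match ratings (1 = good match,
# 0 = poor match). These values are invented for illustration only.
MATCHES = {
    "Oral communication":  {"Structured interview": 1, "Written test": 0, "Work sample": 1},
    "Problem solving":     {"Structured interview": 1, "Written test": 1, "Work sample": 1},
    "Attention to detail": {"Structured interview": 0, "Written test": 1, "Work sample": 1},
}

def build_matrix(competencies):
    """Return the per-competency match matrix plus a total score per method."""
    matrix = {c: MATCHES[c] for c in competencies}
    totals = {}
    for row in matrix.values():
        for method, fit in row.items():
            totals[method] = totals.get(method, 0) + fit
    return matrix, totals

matrix, totals = build_matrix(
    ["Oral communication", "Problem solving", "Attention to detail"]
)
# The method with the highest total covers the most competencies overall.
best = max(totals, key=totals.get)
print(best, totals)
```

In this toy example the work sample covers all three competencies, so it comes out on top — the same kind of "totals" view the tool's report gives you.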

The final step is the generation of a comprehensive report that summarizes the position competencies, assessment matches, and information about each assessment method. There is also a reference section for those wishing to investigate further. The report can be downloaded in HTML format and OPM is working on PDF functionality as well.

A great idea that automates competency-exam linkages that many assessment professionals do routinely. Now if they can just have the system create the actual tests...

Thursday, September 27, 2007

Autumn 2007 Personnel Psychology

The Autumn 2007 issue of Personnel Psychology is out with plenty for us to sink our teeth into, particularly for you personality testing fans out there. Let's take a look:

First up, Luthans et al. present the results of a study that focuses on positive psychology, which is gaining more and more interest these days. The authors describe support for a survey instrument that purports to measure four aspects of "positive psychological capital"--hope, resilience, optimism, and efficacy--and then examine whether results predict job performance and satisfaction. Results? A "significant positive relationship", with the composite of the four aspects outperforming each individually. (Side note: two of the authors published a book last year that focuses on this topic)

Next, Judge & Erez look at how two of the Big 5 personality dimensions--emotional stability and extraversion--predicted job performance at a health and fitness center. Not only did both predict performance on their own, but they did even better in combination. The authors suggest that the combination of emotional stability and extraversion reflects a "happy" or "buoyant" personality that may be more important to predicting performance than each trait in isolation. Great study that goes beyond the "which of the Big 5 are the best" mentality.

Next up, Buckley et al. with a study of race and interview panels. Ten White and ten Black raters viewed videotaped responses of 36 White and 36 Black police officers applying for a promotion. Results? Well, there's good news and bad news. The bad news is that there was a same race bias (i.e., White raters rated White applicants better, Black raters rated Black applicants better) and a significant difference between the panels depending on the ethnic makeup. Good news? The effect size was small and "net reconciliation" (the difference between initial and final scores) was significant (but small) only among Black raters.

Pay attention to the next study, recruiters: Zhao et al. present the results of a meta-analysis on the impact of psychological contract breach on 8 work-related outcomes, including attitude and individual effectiveness. An example of contract breach: telling an applicant you have great work-life balance policies and then never approving leave. So what did they find? Breach was related to all eight outcomes except for actual turnover. Affect mediated this relationship, which suggests to me that if you have to break a contract, you may be able to somewhat manage the impact by being smart about how you present it and being sensitive about the reaction.

Next up, a bevy of big names in the field (let's just call them "Morgeson, et al.") drop a bombshell on personality testing: they argue that because of the low validities associated with self-report personality measures, they should be discontinued for personnel selection! They don't write personality tests off completely, but suggest that alternatives to self-report measures need to be developed (someone may want to tell Judge & Erez; see article above). What might this look like? Conditional reasoning tests are mentioned as a possibility. And, (this is just me talkin') "ability" type measures could be developed (e.g., if you're conscientious you should be able to demonstrate certain behaviors) or we could integrate personality measurement into the reference checking process (hey, I didn't say it would be easy). Oh and hey, here's the article if you're interested; thanks to Dr. Morgeson for making so much of his work available.

Ironically (or is it coincidentally? curse you, Alanis Morissette), the very next article is about the development of a new self-report personality measure, the Five Factor Model Questionnaire. Gill & Hodgkinson criticize existing measures (e.g., they contain too many generic items, they use culture-specific language) and find support for their measure using five separate diverse samples, including evidence of convergent and divergent validity with the NEO PI-R.

So that's the end of the research articles, but not the end of this journal issue. It also contains reviews of several books, including:

- Using individual assessments in the workplace: A practical guide for HR professionals, trainers, and managers by Goodstein and Prien (which looks to be a very useful introductory guide, along the lines of Aamodt et al.'s statistics book)

- Foundations of psychological testing: A practical approach (2nd ed.) by McIntire and Miller, which is designed for an undergraduate-level course.

- and for those of you looking for something a little more advanced, a review is also included of Dr. Viswanathan's Measurement error and research design.

Monday, September 24, 2007

Are individuals liable for employment discrimination?

A common question I hear from supervisors and HR professionals is: "Am I personally liable for employment discrimination when I make a hiring decision?"

This recent article deals with a California Supreme Court decision but covers the answer to this question generally.*


* Short answer: it's rare (except for Section 1981 or 1983 claims** and failing to verify employment eligibility***) but you may be named anyway as a tactic on the part of the plaintiff.



** Which can be particularly nasty since there is no cap on damages and no administrative requirement (like filing with the EEOC). On the other hand it is more difficult for plaintiffs to prevail in these cases, and it's only relevant in cases of disparate treatment.



*** Okay, this might be nastier because you could face jail time. Don't forget those I-9s!