Wednesday, February 28, 2007

Anxiety Impacts Test Scores

One of the (many) news articles that came out of this year's AAAS meeting concerns anxiety's impact on test performance.

Specifically, a panel titled "Interplay of Emotion and Cognition: Implications for Learning and High-Stakes Testing" included a presentation by Dr. Mark Ashcraft on "How Math Anxiety Compromises Performance: The Role of Working Memory."

Dr. Ashcraft (note the article has this misspelled "Ashcroft") noted that math anxiety takes up working memory that is particularly important for complex math problems.

I'm not quite sure how "math anxiety" differs from plain ol' test anxiety, but it looks like Ashcraft has written quite a bit about it.

Frankly, this finding is not particularly surprising given all we know about the impact of stress on the body, but it does serve to highlight that many things can introduce error variance into someone's test score.

So what can we do about it? The research suggests that orientation/training programs can help with reducing people's anxiety. There are also simple steps you can take as part of the testing process, such as:

- Letting people know exactly what the testing process will be like

- Being available for questions about the process

- For interviews, doing things like offering water, pre-exposing some (or all) of the questions, and telling the candidate to take their time

- Using multiple assessment methods so if a candidate "blows it" on one method they can shine elsewhere

On a related note, although taking these steps can help, they probably won't decrease any differences typically observed between different groups (e.g., men/women, White/Black).

Speaking of group differences, one of the other presenters on the AAAS panel was Dr. Joshua Aronson (of Steele & Aronson fame), who presented on "Stereotype Threat and Sex Differences in Math Performance: New Findings"--looks like he's done some research lately on this topic.

Tuesday, February 27, 2007

Q&A #3: Dr. Charles Handler

This is the third in a series of Q&As I'm doing with thought leaders in the area of recruitment and selection.

This edition features Dr. Charles Handler, founder and president of Rocket-Hire, a consulting firm "dedicated to helping organizations use technology and best practices to build effective, legally sound employee selection systems." Dr. Handler is also very active professionally and is a frequent contributor to ERE.

See if you can spot the similarities between Dr. Handler's responses and those of the previous two Q&As...

(Note: as usual, links are provided by yours truly)

BB: What do you think are the primary recruitment/assessment issues that employers are struggling with today?

CH: I believe that the primary issue being faced is understanding how to find applicants with the traits desired by the organization and how to keep them. There is a shortage of talent and persons do not stay in jobs as long as they used to. So, finding folks who have what it takes and keeping them long enough so they can provide a contribution is very difficult.

BB: What is an example of an innovative or creative recruitment/assessment practice that you've seen recently?

CH: Thinking about using virtual worlds such as Second Life as venues for employment branding and recruiting is the most interesting thing I have seen done as of late.

BB: What is an area of research that you think deserves increased attention?

CH: Moving beyond thinking about validating tests towards a broader viewpoint that takes into consideration looking at relationships in data collected as part of the recruitment/hiring process and key organizational outcomes. A broad focus on business intelligence is going to be key for understanding the value of hiring in terms of organizational goals and outcomes.

BB: As someone who has their own consulting business, have you noticed any changes or patterns in the types of requests you're getting from clients?

CH: No, just more interest in using assessment and using it correctly.

BB: Have you read any books or articles lately that you would recommend to the professional community?

CH: The Tipping Point. Can't recall the author's name but it was a good one.

BB: Is there anything else you think recruiters/assessment professionals should be focused on right now?

CH: Doing it right!!! Taking the time to understand key performance drivers before choosing an assessment, using good quality assessments, and measuring the impact they have on key outcomes. These are the basics and they are still not being given the attention they deserve.

Thank you Dr. Handler!

Monday, February 26, 2007

Personality testing basics: Part 2

There continues to be great interest in using personality tests for personnel selection. In Part 1 of this series I reviewed some of the major modern research on personality testing. In this post I will review some of the major commercial personality tests that can be used for selection.

This is by no means an exhaustive list, but these tests have been around for a while, have generally withstood professional scrutiny, and are used by many organizations as part of their hiring process. While not all of them are strict measures of the Big 5, all are based on or strongly related to those factors. As I mentioned in Part 1, not everyone agrees that the Big 5 is the best way to describe normal personality, but it does have considerable support.

Below I have provided a brief description of these tests along with evaluative comments (where available) from the Mental Measurements Yearbook (MMY), one of the few objective reviewers of commercial tests. All can be administered via computer or paper-and-pencil, and prices vary somewhat (approximately $10-30 per test--contact the publisher for details).

The Tests

1. Hogan Personality Inventory (HPI) - The HPI, created by the well-known Dr. Robert Hogan, is based on the Big 5 but actually has seven scales--Adjustment, Ambition, Sociability, Interpersonal Sensitivity, Prudence, Inquisitive, and Learning Approach (note that several scale names have changed in the last few years). The HPI consists of 206 true/false items that, according to Hogan Assessment Systems, are written at a 4th grade reading level and take 15-20 minutes to complete.

MMY (1998) quotes on HPI:
"...[the HPI] appears to be a theoretically sound, carefully conceptualized, and well-validated instrument offering practical utility for organizations."
"...[the HPI] is a valid and reliable test...that is recommended for use as part of a battery in vocational and personnel selection settings."

2. 16PF - The 16PF is published by IPAT and is based on the research of Dr. Raymond Cattell, one of the most famous psychologists who (among other things) is credited with theorizing the existence of fluid and crystallized intelligence. The 16PF measures sixteen "primary factors" as well as the Big 5, contains 185 three-choice items, and IPAT states it is written at a 5th grade reading level and takes 35-50 minutes to complete.

MMY (1995) quotes on 16PF:
"This psychometrically sophisticated measure is a valuable contribution to the testing repertoire of counselors and clinicians."
"...this well-known research instrument has stood the test of time and is supported by a vast body of data."

3. Personal Characteristics Inventory (PCI) - The PCI is a measure of the Big 5 (and subscales) and was developed by Drs. Michael Mount and Murray Barrick, authors of a very influential study on personality testing. It consists of 150 three-choice items that Wonderlic claims are written at 3rd-4th or 8th-9th grade level depending on the scale, and takes about 25-30 minutes to complete.

MMY (2005) quotes on PCI:
"The authors have devised a simple, clear measure of [the Big 5] that may be helpful in some personnel work. Presently available evidence suggests more development work would be necessary...before this reviewer could recommend its use."
"...a good first step in the development of a personnel selection instrument...There are some important reliability and validity issues the authors must address in their revisions."

4. NEO Personality Inventory-Revised (NEO PI-R) - The NEO PI-R is one of several tests of the Big 5 offered by Psychological Assessment Resources (PAR). The test authors, Drs. Paul Costa and Robert McCrae, are well-known researchers and authors of several influential studies on personality testing (hey, who said psychological research doesn't pay?). The test (available in both self-report and observer-report versions) contains 240 items that PAR claims are written at a 6th grade reading level; the test takes 35-45 minutes to complete.

MMY (1995) quotes on NEO PI-R:
"These scales should be considered a standard set of useful tools for personality assessment and may provide a useful bridge between basic research in personality psychology and applied psychology."
"...a reliable and well-validated test of personality that represents a comprehensive operational translation of the Five Factor Model of personality."

So which do I use?

Good question, and there's no easy answer. My advice? First analyze the job(s) and make sure that a test of the Big 5 makes sense given job requirements. Then contact the test publishers and get demonstrations (they should all be willing to do this)--this will allow you to get a feel for the exam as well as the customer service offered by the company. Whatever you do, don't try coming up with a personality test yourself!

Saturday, February 24, 2007

Watson Wyatt: Attending to Hiring Makes a Difference

In a review of HR practices at 50 large U.S. companies, Watson Wyatt found that paying attention to the details of the hiring and orientation process is correlated with financial performance and employee engagement.

For example:

- 65% of companies with a highly engaged workforce provide interview training for their managers compared to 33% for companies with a less engaged workforce

- Companies with highly engaged workers spend 35 weeks on new hire orientation versus 15 for companies with less engaged employees

- 52% of high financial performers explained to new employees why they were hired vs. 29% of low financial performers

Very low hanging fruit with big potential payoffs.

Friday, February 23, 2007

Job Analysis: An Ounce of Prevention...

Anyone who needs to be convinced of the importance of job analysis might be interested in this settlement announcement from the EEOC. The lawsuit stems from discrimination complaints against Woodward Governor, an engine system and parts company. The complaints alleged that Woodward was discriminating against certain ethnic groups and women in pay, training, and promotions in violation of both Title VII and Section 1981 of the U.S. Code.

In addition to having to pay class members a total of $5 million and be under the watchful eye of a court-appointed expert, the consent decree:

"requires that Woodward utilize an industrial organizational psychologist to perform an analysis of production jobs."

Do you have to be an I/O psychologist to conduct job analysis? Nope. Do you want to be mandated to conduct job analysis after being sued? Nope. Be prepared, make sure you have studied and documented your job requirements.

Thursday, February 22, 2007

Personality testing basics: Part 1

If responses to my survey are any indication (see sidebar on main page), personality testing continues to be a hot topic out there.

There's a lot written about personality testing and a lot of advice given out. It can be difficult to know who to believe and which tests are appropriate for use in personnel selection.

This is Part 1 in a two-part post I will be doing on personality testing. This part covers some of the major research that has been done on personnel selection using personality tests; Part 2 will be an overview of some individual test products.

The Research

Every recruiter and assessment professional should be at least familiar with the major research findings in this area. Here are some of the major modern developments and articles to be aware of:

1. Although historically some personality researchers (e.g., Cattell) felt 16 or more factors were necessary to describe the major elements of personality, researchers in the 1960s found through their analysis that five seemed to do a satisfactory job: Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (memory hint: OCEAN). Confirmation of this so-called "5-factor model" included research by Tupes & Christal (1961; not published until the early 1990s) and Norman (1963).

2. In 1965, Guion & Gottier published an influential paper stating "taken as a whole, there is no generalizable evidence that personality measures can be recommended as good or practical tools for employee selection." This, combined with the rise of behaviorism, led to a reduction in personality research until...

3. In the 1980s researchers once again began focusing heavily on individual differences and concepts like temperament and personality. This resulted in the re-emergence of the "Big 5" in, for example, Goldberg (1981), Costa & McCrae (1987), and Digman (1989).

4. In the 1990s several important studies were published, including Barrick & Mount's highly influential 1991 meta-analysis, which helped reinvigorate research into personality testing in the workplace and showed that test results could have significant predictive ability (particularly for Conscientiousness). Later that same year, Tett and colleagues published another influential study verifying the usefulness of personality tests.

5. In 1993 Ones, Viswesvaran, and Schmidt published research showing the substantial usefulness of integrity measures in predicting work behavior.

6. Lewis Goldberg's 1996 introduction of the International Personality Item Pool (IPIP) gave personality researchers access to a publicly available personality research instrument.

7. In 1997, Jesus Salgado broadened the scope beyond the U.S. and Canada, publishing research indicating that personality measures, particularly Conscientiousness and Emotional Stability, predict job performance in the European Community.

8. In 1998, Schmidt & Hunter published one of the most influential personnel research articles in history, meta-analyzing the major assessment methods. The authors stated that the correlation values for Conscientiousness are "large enough to be practically useful."

9. Hogan & Holland's 2003 meta-analysis shows that when you match what you're looking for (predictor) with what you measure (criteria), predictive ability can be even higher than previously reported, with correlations ranging from .34 to .43.

This is just a sample. Research into personality testing continues to be very popular, with many of the SIOP and IPMAAC conference presentations devoted to the subject. That's part of what makes it one of the most interesting, and controversial, forms of testing.

Tuesday, February 20, 2007

When Hiring a Surgeon, Should We Ask If They Play Video Games?

You may have seen this news article today, about a study published in the most recent issue of the Archives of Surgery. The study found a strong relationship between video game experience/skill and performance on a surgical skills test among a sample of 33 residents and attending physicians at Beth Israel Medical Center in New York.

How did the authors come to this conclusion? They had the participants play some games, fill out a survey about their game playing experience, and take a simulation test (interestingly called "Top Gun").

Correlation, causation, whatever

The authors note that "video game skill correlate[d] with laparoscopic surgical skills." But here's where everything goes to heck and those of us who have taught statistics hang our heads in shame. One of the fundamental lessons in statistics is that correlation is not causation--just because two things relate to one another doesn't mean one CAUSES the other. My favorite example is the positive correlation between ice cream cone purchases and crime rate. Does ice cream cone purchasing cause crime? Or, um, maybe there's a third variable (like heat)?

The news outlets, feeding off the study authors, picked this up and ran with it, suggesting that playing video games somehow causes you to be a better surgeon--or even that kids who play video games could somehow be preparing themselves for a high-paying career.

But here's the thing: this study did not have inexperienced video gamers play a game and then perform surgery. It's purely point-in-time. All this study showed is that folks who are good at video games are also good at the surgical task. Makes sense--they involve some of the same skills. But that's all we can really say. We can't suggest that people play more video games if they want to be better surgeons (and to their credit, the news articles generally point this out).

Biodata pops up again

On the other hand, this type of information could be useful for another purpose--biodata. Google has been in the news recently (discussed here and elsewhere) for revamping its hiring practices to include biodata. So as part of a hiring process, questions could be asked about hobbies, one of which could be video games. But you certainly wouldn't want this to be your entire selection process.

On the other hand, with so many kids playing video games these days, is there really going to be a lot of variance in experience 10 years from now for us to use in separating out applicants? Maybe by then we can ask applicants if they've completed Expert level on Surgeon Wars.

Monday, February 19, 2007

Good to Great: Business v. Social Sector

No doubt many of you have read Jim Collins' 2001 best seller, Good to Great: Why Some Companies Make the Leap...and Others Don't.

What you may not have read is a small monograph that was published four years later, titled Good to Great and the Social Sectors.

The line on the front, "Why business thinking is not the answer" tells part of the story. Collins' basic point is that while some lessons of great businesses can be fruitfully translated from private to social sector organizations (e.g., discipline), others cannot (e.g., financial measures of output).

This point has been made before, but the desire to mold social sector organizations into businesses keeps cropping up, and the HR field seems to be fertile ground for this tempting idea.

Tempting because many people, particularly those in the social sector, often feel (a) frustrated with inefficiencies, and (b) hopeful that lessons from the private sector, if adopted, would transform these organizations into lean, productive stars, garnering praise and respect from constituents.

Measuring greatness in the social sector

Collins gives several good examples of how social sector organizations have dealt with one of the most vexing challenges--how to measure performance:

- The New York Police Department went from focusing on "input variables" such as arrests and cases closed to outputs--namely, crime rate.

- The Cleveland Orchestra measures outputs such as number of standing ovations, number of invitations to prestigious festivals, and influence on other orchestras.

How does this relate to recruitment & selection?

One of Collins' big points from the 2001 book is that hiring the right people comes first--or as he states, "First who--getting the right people on the bus." While I would argue that you need to know some of the "what" before you can get the "who" (e.g., what are you looking for?), he makes some sound arguments in the monograph:

- Once the right people start joining the group/organization in large enough numbers, low performers/poor fits often self-select out, like "viruses surrounded by antibodies." (We'll generously ignore the fact that he's comparing people to viruses)

- "There is no perfect interviewing technique, no ideal hiring method." Can't agree with this one enough, although his solution (focus on a probationary period) is easier said than done. I would argue a reasonable amount of time spent upfront on defining the job and being smart about the hiring methods used saves time and energy down the road on poor fits. You wouldn't buy a house based just on a 30 minute tour--why would you hire someone this way?

- "The more selective the process, the more attractive a position becomes." Again, agree with him here. Organizations, whether private or social sector, need to make positions desirable to the highly qualified. Demanding selection procedures haven't kept Microsoft, Toyota, or Google from attracting top talent.

- Last but not least, "the social sectors have one compelling advantage: desperate craving for meaning in our lives. Purity of mission...has the power to ignite passion and commitment." I believe this is what attracts and retains most of the top talent in the social sector, and it's the factor these organizations need to capitalize on.

Social sector organizations aren't making widgets, they're keeping people from having their homes robbed, keeping children from being molested, keeping drinking water safe, etc. Those are powerful outcomes. Use them to recruit and retain the most qualified.

Saturday, February 17, 2007

February traffic report

In the last 30 days...

What are this blog's most popular posts?

Hewitt study quantifies value of high performers

Gallup poll yields lessons for employers

Assessment in the mainstream press

Weddle's top 30 job boards

More on Google hiring

Inbound traffic--what sites are people coming from?

Rocket-hire newsletter


Where are my visitors located?

U.S. (most popular cities: Seattle, Pittsburgh, St. Louis)
India (Mumbai, Delhi)
Canada (Calgary, Toronto)
Israel (Tel Aviv)

What are people searching for that lead them here?

Gallup assessment/test
Google hiring practices
Jobs 4.0
Personnel selection state-of-the-art
Hiring for attitude

Thanks for reading!

Friday, February 16, 2007

Eighth circuit reverses ADA case against Wal-Mart

On February 13 the Eighth Circuit joined several other courts of appeal (e.g., 2nd, 5th, 7th, 9th) in ruling that in Americans with Disabilities Act (ADA) cases where an employer argues a job applicant would pose a "direct threat" to their own safety or the safety of others, the employer has the burden of proof. The case is EEOC v. Wal-Mart.

You may remember this issue from when it made headlines in 2002 in the Chevron v. Echazabal case. In that case the U.S. Supreme Court was looking at whether the EEOC regulation that allows the direct threat defense was in conflict with the ADA; the court decided it was not. (By the way, although Chevron won that round, they actually failed to prevail when the case went back down to the 9th circuit).

In the current case, the court ruled that Wal-Mart failed to show that, even with reasonable accommodation, the plaintiff, who has cerebral palsy, would pose a direct threat--a ruling that relied primarily on testimony from Wal-Mart's own expert witness.

What's the upshot? If you're going to deny someone a job based on a disability, tread VERY lightly. Give the person the opportunity to show how they would perform the (well documented) essential functions of the job, with or without reasonable accommodation. And don't assume the "direct threat" defense is a slam dunk; courts are making very clear that it is not. Those types of decisions need to be based on clear medical evidence (see section (r) at the bottom of this page).

More details regarding this type of case can be found in several places, including here.

Using social networking sites and video

I just attended another good HCI webinar titled "The Next Generation Resume."

Rather than focusing on resumes per se, the presenters covered two main topics:

- Steven Rothberg, President and founder of CollegeRecruiter.com, presented on "Facebook, MySpace, and Other Social Networking Sites: Are They Dangerous, Opportunities, or Both?"

- Peter Altieri, Founder and CEO of RecruiTV/wetjello, presented on "Video Resumes/eprofiles...Fad or Reality?"

Major take-aways

1. There are big generational differences in what is perceived as "acceptable" content. What may seem questionable or offensive to a Boomer (e.g., a photo of someone in their underwear posted on MySpace) may seem perfectly normal to a Gen Y'er.

2. Evaluating user-generated content on social networking sites (e.g., as part of a background check) may be perceived as offensive and may do you more harm than good.

3. Because of these issues as well as some potential legal complications (e.g., you can never be sure who posted the content), social networking sites are probably best used for sourcing rather than screening out. George Lenard has posted extensively about this on his site. If you do find some negative content, give the candidate an opportunity to explain.

4. You probably already know this, but of the social networking sites, MySpace dwarfs the competition with an 80% share. Even Facebook, which has been getting a lot of press, only has an 8% share.

5. For an example of how to use something like MySpace for recruiting, check out the page the Marines have created, which has been very successful.

6. Job search engines and social networking sites are hookin' up. MySpace and LinkedIn use SimplyHired, Facebook just hooked up with Jobster, etc.

7. There's no case law on misuse of social networking site information (that the speakers knew about), but it may be coming...

8. If you're recruiting college students, Facebook is the way to go, with 90% of students using it. For general recruiting, MySpace is superior, with an average user age of 35.

9. If you're interested in posting video to job search pages (I hope that's a big YES), both Jobster and CollegeRecruiter welcome videos from employers. Vault is another popular option.

10. Great suggestion from Altieri for introducing video into your recruiting process: Ask a supervisor what sort of person they're looking for. Get it on video. Share with second round candidates. Or: Ask them to ID a star player. Interview said player. Share with potentials.

While the presentations are available only to "professional" HCI members, Rothberg's presentation was similar (if not identical) to the one he gave at last year's Onrec conference that Joel Cheesman generously posted.

Thursday, February 15, 2007

Using BLS to Source for Passive Candidates

Whether you're a believer in passive candidates or not, there's no denying that they're a source of talent to be considered.

The challenge, as with all qualified applicants, is finding and attracting them.

Attracting candidates is up to you--you need to figure out how to take advantage of your brand, understand your jobs, have great "salespeople", and make sure your attractive job postings reach the right people.

Finding passive candidates, however...there's something I may be able to help with. You are no doubt familiar with various sneaky or not-so-sneaky ways to identify passives, such as social networking websites or identifying locations where qualified candidates likely hang out.

But what about just answering this question: where ARE the passive candidates? Turns out that information is relatively easy to get (at least in the U.S.). How? By looking at numbers collected by the Bureau of Labor Statistics (BLS).

Show me the numbers

The BLS conducts a semi-annual mail survey to produce estimates of employment and wages for specific occupations through the Occupational Employment Statistics (OES) program. Luckily for us, it's relatively easy to extract nuggets of information from this data.

How? Let's do an example. Let's say I have an urgent need for Registered Nurses. I want to know where they exist in the largest numbers so I know where to target my recruiting, and I'm willing to consider a national recruitment.

1. First, we need to find the link that allows us to create a customized table. Right now the starting point for doing that is here.

2. Next, we select "One occupation for multiple geographic areas."

3. This brings up a big table that allows us to select the particular occupation we're interested in. We know we want Registered Nurses, so we scroll down and select that category (about 3/8ths down the list).

4. Next, we get to select what geographic level of analysis we want. I want the micro level of detail, so I select "Metropolitan area."

5. Next up, we can select what area we're interested in. I want them all, so I leave the default selection of "All MSA in this list" and continue.

6. The next screen has two important choices for us to make. First, we need to decide what specific data we want. I just want to know straight numbers on where folks are employed, so I select "Employment" from the first menu. We then get to choose how we want this data. Because I need to sort the data, I choose to receive it in MS Excel format and continue. This should bring up a download pop-up depending on your browser. I choose to save the file.

7. Now I bring up the file that I saved. I want to sort the employment numbers but some of the cells have been merged. So the first thing I do is select all of the cells, go to "Format Cells", then to the Alignment tab, and de-select "Merge cells."

8. Now I'm ready for the final result. I select Data -> Sort -> Sort by Column B -> Descending. And what do I see? Once we look past the odd repetition of data, we find the top 5 metropolitan areas for Registered Nurses:

- New York/Northern New Jersey/Long Island
- Chicago/Naperville/Joliet
- Los Angeles/Long Beach/Glendale
- Philadelphia/Camden/Wilmington
- Boston/Cambridge/Quincy

So if I'm looking at advertising my vacancy, I'll want to seriously consider these markets--whether that's a newspaper ad, local professional magazine, job board, or whatever.

This is just a basic example. I'm sure you can see how useful this would be given the vast number of occupations BLS tracks, as well as the ability to drill down to various geographic areas. Give it a spin!
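If you find yourself repeating the Excel steps above for many occupations, the sort-and-take-top-5 step is also easy to script. Here's a minimal sketch; the figures below are hypothetical stand-ins for the "Employment" file you'd download from the OES site.

```python
import csv
import io

# Hypothetical sample standing in for the downloaded OES file;
# the real file has one row per metropolitan area.
sample = """area,employment
New York-Northern New Jersey-Long Island,163840
Chicago-Naperville-Joliet,74300
Los Angeles-Long Beach-Glendale,62010
Boston-Cambridge-Quincy,41870
Philadelphia-Camden-Wilmington,45490
"""

# Read the rows, sort by employment (descending), and list the top markets.
rows = list(csv.DictReader(io.StringIO(sample)))
rows.sort(key=lambda r: int(r["employment"]), reverse=True)

for r in rows[:5]:
    print(r["area"], r["employment"])
```

To use it on real data, save the BLS download as CSV and swap the `sample` string for `open("oes_download.csv")`.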

Wednesday, February 14, 2007

Reliability and validity--it's okay to Despair(.com)

One of the key concepts in personnel assessment is the distinction between reliability (sometimes called consistency) and validity. I'll say a brief word about both, then get to the fun part of this post.


There are several different "kinds" of reliability. One kind (test-retest) looks at whether someone gets the same score if they take the test at different points in time. Another kind (internal consistency) refers to the extent to which parts of the test "hang together." For example, if we split your test of Knowledge of Basket Weaving into two halves, would scores on the first half pretty much mirror scores on the second half?


Similar to reliability, there are several "kinds" of validity when it comes to tests. Without boring you to tears, validity essentially refers to the extent to which the results of the test can be interpreted the way you want. If you're looking for someone's math skill, is that what the test is measuring? That's the most common layperson definition of "validity", although there are the traditional concepts of content validity, criterion-related validity, construct validity, and face validity.

Having fun yet? Two final points. First, you can't have validity without reliability--a test can't measure what you want it to measure if someone gets a wildly different score every time they take it. And the corollary: just because a test is reliable doesn't mean it's valid (more on this in a sec).

The fun part of the post

Okay, so we've done the requisite introductions. The real part of this post has to do with a company called Despair, Inc. If you haven't heard of them, Despair sells "de-motivational" products that mock the traditional motivational posters you've no doubt seen in offices everywhere.

The company recently introduced some new products and one of them is related to my discussion above. Take a look:

Although I suspect this relates more to individual performance, the point is the same when applied to tests--a consistent (reliable) test is good only if it's measuring what you intend it to.

I may need to get this one, although I already have three of their other posters hanging in my office. I use them to gauge sense of humor:

Tuesday, February 13, 2007

Employer branding made easy

There's been a lot of discussion over at ERE these days about employer branding, so it was nice to see IPMA-HR get into the swing of things in its most recent edition of IPMA-HR News (unfortunately not available to non-members but keep reading).

There were several good articles in this edition, including:

"Can HR Have a Brand Image, and If So, How Can One Determine HR's Current Reputation?"

"Attracting Talent to Government: Marketing the Mission"

Another article that really stood out was Mark Hornung's, titled "The Benefits of Employer Branding for Government Agencies: What Regulators May View as a Luxury Is a Necessity in Today's Tight Labor Market."

Mark did a great job summarizing the challenges organizations face when thinking about branding, how to assess your current brand, how to communicate it, and playing to your strengths. Targeted at the public sector, but applicable to any organization.

Although this publication isn't available to non-members (if you join, you get access to the newsletter back to November '04), Mark's employer, the Bernard Hodes Group, has some great information on their website, including a branding game (!) and an interview with Mark.


...By the way, feel free to let me know what you're interested in by responding to the survey posted on my main page. I had been running a similar survey using Sparklit but switched to PollDaddy after seeing how easily Joel Cheesman integrated it into his blog postings.

Monday, February 12, 2007

MSPB Releases Report on Practice of Merit

The U.S. Merit Systems Protection Board (MSPB) last week released a report that chronicles proceedings from a conference it held last year titled, "The Practice of Merit: A Symposium."

Contents of the proceedings are available by topic here, including a session devoted to hiring.

A report of the entire proceedings, including all comments, is available here.

Topics include:

- How the Department of Veterans Affairs recruits, assesses, and retains health care professionals and shies away from written tests

- How the State Department recruits and assesses Foreign Service professionals, including brand establishment and relying on test results over resumes

- How the Office of the Director of National Intelligence recruits, screens, and retains intelligence professionals

Each contributor emphasized the importance of establishing and maintaining relationships with educational institutions and maintaining a commitment to a merit-based system when given more flexibility in hiring.

Friday, February 09, 2007

Yahoo Pipes in Powerful New Search Tool

I don't normally post about things not totally related to personnel selection and recruiting. But I do make exceptions, particularly when I think it will become relevant.

One of the big stories yesterday was that Yahoo was forced to take down one of its websites because it received so much traffic--the day it was introduced. What was this new site? It's called Yahoo Pipes, and it's worth taking a look at even if you don't know your RSS from your URL.

In a nutshell, Pipes allows novices (okay, semi-novices) to aggregate/mash-up/recombine information from several sources and have all kinds of fun with it. Examples include a Pipe that takes content from the New York Times homepage and searches Flickr for related photos. Another allows the user to search for apartments within a certain range of a feature--such as a park, a grocery store, etc. Once you run the Pipe, you can subscribe to the feed results.

So how is this relevant for us? Several ways, I think. One possibility is taking job search results from multiple sites and allowing the job seeker to filter the results in a variety of different ways (location, duties, etc.) Another is for recruiters to source talent from several sources simultaneously, filtering for experience, competencies, etc. A third use is simply to keep up to date on a topic of particular interest to you or your organization, giving you a bit more control than the typical feed reader. This is just the tip of the iceberg.

It's not perfect--one problem I ran into is that aggregating and searching jobs is constrained by the small number of results presented on each results/RSS page. And although it's quite user friendly, it could be better--and likely will get there.

Google has been rolling out innovative tools left and right these days (e.g., Base, Co-op, Alerts, Docs). Yahoo must be doing quite a bit of preening right about now.

Thursday, February 08, 2007

Measuring quality of hire

I attended a pretty darn good little HCI webcast on January 30th presented by Taleo and Dell.

The title was, "Measuring quality of hire: You can't improve what you can't measure." Taleo gave an overview of why quality is important and how to improve it, and Dell provided a case study of how they implemented a quality measurement system.


- Intangibles, such as the skills and abilities your workforce brings to the table, are an increasingly important aspect of firm value

- According to a survey, only about 20% of organizations define successful worker characteristics (i.e., perform a job analysis) prior to hire

- Measuring quality doesn't have to be difficult; for example, surveys can easily be distributed to hiring managers [e.g., through SurveyMonkey]

- The presentation provides cost-of-mishire data for a variety of occupations, including Management, Computers, Legal, Protective Service, Construction, and Transportation [worth looking at just for this]

- Throughout the presentation, they present numerous ideas for metrics to use when measuring quality (e.g., time to fill, cost per hire, retention)

You can view the slides here.

The Conference Board reports online job ad statistics

The Conference Board has released an analysis of U.S. online job advertising trends for January of 2007 and it's got some really juicy numbers for us.


- There were 3,141,800 vacancies posted online, a drop of 6% from December of 2006 (primarily seasonal). This includes nearly 2 million jobs that were not posted in the previous month.

- California had the largest drop by far in posted vacancies--not surprising given that it is the state with the largest labor market and the highest overall number of job postings (532,000 in January).

- Over the entire year there was an increase of 12% in the number of postings.

- Maine and Oklahoma outpaced other states with the largest increase in posted vacancies over the last year (+68% and +50%, respectively).

- Four states -- Hawaii, Virginia, Delaware, and Utah -- had more advertisements than job seekers (!). Top cities in this category include Washington DC, Salt Lake City, and San Jose.

- Occupations in the highest demand were Management, Business and finance, Office and admin support, Computer and mathematical, and Healthcare. With the exception of Office jobs, these are also some of the highest paid jobs available.

- New York and Los Angeles had the highest number of posted vacancies.

This is just the tip of the iceberg--the report includes voluminous information breaking these numbers down. Good stuff.

Wednesday, February 07, 2007

Ninth circuit affirms class action status in Wal-Mart case

The U.S. Ninth Circuit has affirmed the district court's approval of class action status for plaintiffs in Dukes v. Wal-Mart, an employment discrimination lawsuit that claims the company discriminated against women in compensation and promotion opportunities in violation of Title VII.

Good summaries are available at EASI-HR Blog and Ross' Employment Law Memo.

Monday, February 05, 2007

Can grumpy workers lead to better organizational performance?

One of the most interesting findings in industrial/organizational psychology is that the relationship between job satisfaction and job performance isn't particularly strong (although we have people looking into it).

Now, new research from Dr. Jing Zhou at Rice University may help us understand why.

Dr. Zhou gathered survey responses from 161 employees (granted, not a large sample) of a large oil-field services company and their supervisors. What she found was that people who experienced periodic bad moods tended to be more focused on detail, more analytical, and more creative--partially because they're motivated to get out of their bad mood.

Overly happy people, on the other hand, aren't likely to see potential problems until it's too late.

It depends on the job

Bob Hogan, of Hogan Assessment Systems, rightly points out that the type of person you want depends on the job. If you're hiring in advertising or product development, you might look for someone who gets agitated when confronted with a problem. On the other hand, if you're hiring for a call center, grumpy shouldn't be high on your list--instead you want the bright cheerful person who can withstand a lot of negativity.

And on the organization

Zhou and Hogan also both point out that it's not enough to have the right person--the organization must support the expression of these types of emotions and encourage change. If a frustrated person is constantly squashed or told to cheer up, those innovative ideas may never bubble to the top. There's also a tie here to perceptions of organizational justice, of which being able to voice your opinion is an important aspect.

There's grumpy, then there's GRUMPY

Of course you don't want someone who's grumpy all the time. As Zhou points out, there's not much you're able to do with someone like that. Chances are that's not a good job-person fit (although it always warrants a little investigation).

In addition, there's a distinction between grumpy and angry--after all, we want people expressing themselves appropriately, not with office furniture.

More about Dr. Zhou's research can be found in this interview.

Friday, February 02, 2007

EEOC releases charge statistics for 2006

The EEOC has released statistics describing the types of charges filed and resolutions for FY 2006.


- A total of 75,768 charges were received, the first increase since 2002.

- Charges based on race, sex, and retaliation were most common, as in previous years.

- All charge categories were up except age (surprisingly) and equal pay.

- A record 15% of sexual harassment charges were filed by men.

- The EEOC had a record high "merit factor rate" of 22% (charges resolved with favorable outcomes for the plaintiffs).

- A record high 8,201 cases were resolved through mediation.

- $274M in monetary benefits was obtained (including litigation), down from $376M in 2005.

Johnson v. City of Memphis decision available

Thanks to Lance Seberhagen, the U.S. District Court decision in Johnson v. City of Memphis is now available in IPMAAC's library.

Why is this case important? Because it's a rare example of a plaintiff prevailing in a Title VII disparate impact case by arguing that, even though the exam process was judged to be valid, there existed another process with similar validity but less adverse impact that the employer should have used, per the Albemarle decision, 42 U.S.C. 2000e-2(k)(1)(A), and Section 3B of the Uniform Guidelines.

This is an area where many folks think there will be increased litigation, which makes it that much more important that when we're designing selection systems we take a very broad view of what methods are available and their likely features (validity, adverse impact, practicality, etc.). I presented on this issue with some colleagues a few years ago.

If you want to skip to the meat of the case, read pages 16-27, including this gem:

"Plaintiffs are not required to have proposed the alternative. The requirement is only that the alternative was available. The Court reads "availability" in this context to mean that Defendant either knew or should have known that such an alternative existed."

If you'd like your eyebrows to rise a little higher, read the remedies on pages 37-38.

No word yet on whether the City will appeal.

Thursday, February 01, 2007

5-point, 7-point, 9-point, oh my!

One of the most common questions that comes up in assessment is: What type of rating scale should I use? (Followed closely by: What is it exactly that you do again?)

Usually when people ask this question, they're putting together an interview and (bless their hearts) are taking the time to think about how to grade candidate responses. But it doesn't have to be for an interview--rating scales are used for other types of tests, most notably work sample/performance tests. And then there's that whole "performance evaluation" thing.

So what does the research say?

Well, that depends on who you ask. Let's take a look at what some folks much smarter than me have said:

Nunnally (1978): "[the increase in reliability] tends to level off at about 7, and after about 11 steps there is little gain in reliability..."

Landy & Farr (1980): "...the format should require the individual to use one of a limited number of response categories, probably less than nine..."

Cascio (1998): "...4- to 9-point scales [are statistically optimal]..."

Pulakos, in Whetzel and Wheaton (1997): "A generally accepted guideline is somewhere between four and nine."

So what do we take from this? That generally we should shoot for between 4 and 11 scale points.

Does it really matter?

Probably not. Some of the most comprehensive studies of the topic have determined that when all is said and done, the number of points probably doesn't make that big of a difference. For example:

Wigdor & Green (1991): "...the consensus of professional opinion is that variations in scale type and rating format do not have a consistent, demonstrable effect on halo, leniency, reliability, or other sources of error or bias in performance ratings..."

Landy & Farr (1980), again: "...about 4%-8% of the variance in ratings can be explained on the basis of format."

Guion (1998): "The 5-point scale is so widely used that it seems as if it had been ordained on tablets of stone...there is little evidence that the number of scale units matters much..."

I suspect that if the problem is with the rating format, in most cases it's not because there are too few/too many categories, but that the categories aren't anchored very well. Have you ever had to rate an answer with a scale like this: Excellent - Satisfactory - Poor ? What the heck does "Satisfactory" mean? That doesn't help the rater, doesn't lend itself to reliable and valid measurement, and certainly won't look good in court.

The other big problem is rater training. Some organizations do a great job of training raters. Many don't. Without extensive rater training, you're just asking for all kinds of errors to enter into the equation. Panel members should pre-test the interview, try to poke holes in it, and generally discuss.

Bottom line

Back in 1956 a little article was published that you may have heard of. It was titled, "The magical number seven, plus or minus two: Some limits on our capacity for processing information." In this article, George Miller argued forcefully that humans seem to have a natural limit of around 7 (+/- 2) pieces of information they can deal with simultaneously.

51 years later, we don't seem to have changed our mind much.