Thursday, August 28, 2008

Real world value of strategic HRM practices

In my last post I talked about two articles from the most recent issue of Personnel Psychology (Autumn, 2008) that had to do with adverse impact.

In today's post I'd like to talk about three of the other articles in that issue that all have to do with strategic human resource management (SHRM; not that SHRM) practices and their bottom-line impact. These studies don't directly reflect recruitment or selection practices but will interest anyone with a broader interest in HR.

The first study (by Birdi et al.) compared several common SHRM practices (empowerment, training, and teamwork) with several more operational-style initiatives (TQM, JIT, AMT, and SCP) and looked at their effect on company productivity. The authors had access to a great database covering 308 companies over 22 years (!).

So what did they find? Out of the SHRM practices and the operational-style initiatives, only two--both SHRM practices--had significant effects on productivity. Specifically, the effects of employee empowerment and extensive training represented a gain of approximately 7% and 6%, respectively, in terms of value added per employee. Interestingly, it took both empowerment and training a couple of years to impact productivity.

The second study, by Nishii et al., looked at the attributions employees make about the reasons why management adopts certain HR practices and how these attributions impact attitudes, organizational citizenship behaviors (OCBs), and customer satisfaction. Data from over 5,000 employees of a supermarket chain were analyzed.

So what did they find? A significant and positive correlation between employee attitudes and an attribution to the employer that the adoption of HR practices was based on a concern for customer service quality or employee well-being. In turn, employee attitudes were significantly (although less so) positively correlated with OCBs. Finally, the OCB of helping behavior was significantly correlated with customer satisfaction. In other words, when employees felt HR practices were implemented with an eye toward improving service quality or their own well-being, this improved their attitudes, which in turn increased the likelihood they would demonstrate helping behavior toward coworkers, which increased customer satisfaction. On the other hand, when employees attributed HR practices to keeping costs down, getting the most work out of employees, or complying with union requirements, there was no impact on employee attitudes.

The third study looked at how changes in team leadership impacted customer satisfaction in branches of a regional bank. Walker et al. examined data from 68 branch managers over a four-year period. The authors performed two tests: a test of the mean differences between time periods and a residual analysis.

Why does that matter? Because the first type of test (called a t-test) simply looked at whether managers improved their team leadership scores and whether customer satisfaction ratings, on average, went up during that period. The answer to these questions was no. But the residual analysis looked at whether specific managers who improved (or worsened) their team leadership scores saw parallel improvement (or declines) in customer satisfaction ratings. The answer to THAT question was yes--in two of the three time periods (r=.21 and .31, respectively).
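The distinction between the two tests can be sketched with made-up numbers. This is my own illustration, not Walker et al.'s data or method: the sample values, variable names, and effect size below are all hypothetical, and the simple change-score correlation stands in for their residual analysis.

```python
import random
import statistics

random.seed(1)

# Hypothetical data for 68 branch managers at two time points,
# constructed so the AVERAGE change in leadership is near zero,
# but managers who improve also see satisfaction gains.
n = 68
lead_t1 = [random.gauss(50, 10) for _ in range(n)]
lead_change = [random.gauss(0, 5) for _ in range(n)]           # mean ~0
lead_t2 = [a + c for a, c in zip(lead_t1, lead_change)]
sat_t2 = [70 + 0.8 * c + random.gauss(0, 3) for c in lead_change]

def pearson_r(x, y):
    # Plain Pearson correlation, no external libraries needed
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Test 1: mean difference -- "did leadership go up on average?"
mean_diff = statistics.mean(lead_t2) - statistics.mean(lead_t1)

# Test 2: individual change -- "did the managers who improved see
# higher satisfaction?" (a simplified stand-in for residual analysis)
r = pearson_r(lead_change, sat_t2)

print(f"mean change in leadership: {mean_diff:.2f}")  # near zero
print(f"r(change, satisfaction):   {r:.2f}")          # clearly positive
```

The point is that a flat average can hide real manager-level effects: the first test finds nothing, while the second does.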

So what does all this mean? These studies certainly suggest that strategic HRM initiatives such as empowerment, communication, and extensive training (for both leaders and subordinates) can have significant, practical impacts on outcomes important to the organization.

Monday, August 25, 2008

Adverse impact on personality and work sample tests

The latest issue (Autumn, 2008) of Personnel Psychology has so much good stuff in it that I'm going to split it into two parts.

The first part, which I'll do today, focuses on the more selection-oriented articles which have to do with adverse impact on personality tests and in work sample exercises. In my next post I'll talk about three more articles that have to do with strategic HRM practices.

Today let's talk about adverse impact. It's a persistent dilemma, particularly given many employers' desire to promote diversity and the legal consequences of failing to avoid it. One of the "holy grails" of employee assessment is finding a tool that is generally valid, inexpensive to implement, and does not result in large amounts of adverse impact.

One type of instrument that has been suggested as fitting these criteria is the personality test. They're easy to administer and can be valid predictors of performance, but our knowledge of group differences has up until now been limited. In this issue of Personnel Psych, Foldes, Duehr, and Ones present meta-analytic evidence that attempts to fill in the blanks.

Their study of Big 5 personality factors and facets is based on over 700 effect sizes. So what did they find? There is definitely value in separating the factors from the facets, as they show different levels of group difference. And most of the group differences (in cases with decent sample sizes) were small to moderate. Here are some of the largest and most robust findings (i.e., the 90% confidence interval does not include zero):

- Whites scored higher than Asians on even-temperedness (an aspect of emotional stability; d=.38)
- Hispanics scored higher than Whites on self-esteem (an aspect of emotional stability; d=.25)
- Blacks outscored Asians on global measures of emotional stability (d=.58)
- Blacks outscored Asians on global measures of extraversion (d=.41)
- Hispanics outscored Blacks on sociability (d=.30)

The article includes a very useful chart that summarizes the findings and indicates when adverse impact may occur given certain selection ratios. What I take away from all this is that the classic racial discrimination situation employers are worried about in the U.S. (Whites scoring higher than another group) is less of a concern with personality tests than with, say, cognitive ability tests. But (and this is a big but), it doesn't take much group difference to result in adverse impact (see Sackett & Ellingson, 1997).
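To see the Sackett & Ellingson point concretely, here's a back-of-the-envelope sketch (my own, not from either article) that assumes normally distributed scores in both groups and a single top-down cutoff. Even modest d values push the minority/majority selection-rate ratio below the 4/5ths benchmark once selection gets competitive:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def impact_ratio(d, selection_ratio):
    """Minority/majority selection-rate ratio for two normal score
    distributions whose means differ by d (majority higher), with the
    cutoff set so the majority pass rate equals selection_ratio."""
    # Binary search for the cutoff z where 1 - CDF(z) = selection_ratio
    lo, hi = -6.0, 6.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if 1.0 - normal_cdf(mid) > selection_ratio:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2.0
    majority_pass = 1.0 - normal_cdf(cut)
    minority_pass = 1.0 - normal_cdf(cut + d)  # minority mean is d lower
    return minority_pass / majority_pass

# With a 10% majority selection rate, even a "small" d violates the
# 4/5ths (0.80) benchmark:
for d in (0.2, 0.4, 0.8):
    print(f"d = {d}: impact ratio = {impact_ratio(d, 0.10):.2f}")
```

A real adverse impact analysis uses actual applicant flow data, of course, not normal-curve approximations; this just shows why small mean differences are not a free pass.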

The second article is also about group differences. This time it's work sample tests and it's a meta-analysis of Black-White differences by Roth, Bobko, McFarland, and Buster.

The authors analyzed 40 effect sizes in their quest to dig further into this subject--and it's a good thing they did. A group difference (d) benchmark often cited for these exercises is .38 in favor of Whites. These authors obtained a value of .73, but with an important caveat--this value depends greatly on the particular work sample test.

For example, in-basket and technical exercises (e.g., reading a construction map) yielded d values of .74 and .76, respectively. On the lower end, oral briefings and role-plays had d values of .22 and .21, respectively. Scheduling exercises were in the middle at d=.52.

Why the difference? The authors provide data indicating that the more a measure is saturated with cognitive ability/job knowledge, the higher the d value. The more an exercise requires demonstrating social skills, the lower the d value.

Bottom line? Your choice of selection measure should always be based on the KSAs required per the job analysis. But given a choice between different exercises, consideration should be given to the group differences described above. Blindly selecting a work sample over, say, a cognitive ability test, may not yield the diversity dividends you anticipate (in addition to the fact that they may not be as predictive as we previously thought!).

Some important caveats should be noted about both of these pieces of research: (1) adverse impact is heavily dependent on factors other than group differences, such as applicant population, selection ratio, and stage in the selection process; and (2) from a legal perspective, adverse impact is only a problem if you don't have the validity evidence to back it up. Of course you should have this evidence anyway, because that's how you're deciding how to filter your candidates...right?

Friday, August 22, 2008

Tracking down the "Internet Applicant Rule"

When the OFCCP's Internet Applicant Recordkeeping Rule first came out it generated a lot of discussion.

You don't hear much about it now, even though it's one of the most important regulations that covered employers need to be concerned about when it comes to electronic recruiting. It specifies information that must be collected and retained about applicants and permissible screening criteria to filter down candidates.

Why the drop in popularity? Sure, it's not a new and sexy topic anymore. But another reason might be that the OFCCP doesn't make it easy to find information on the rule and they don't publicize it prominently anymore. There's no link to it on their homepage; the actual Federal Register rule is nowhere to be seen.

And the pretty-darn-helpful FAQs? Moved. Here. (granted it IS in the FAQ section)

Let's not forget about this particular regulation, as it impacts recruitment and selection (for those it applies to) just about as much as anything out there, including the Uniform Guidelines.

Wednesday, August 20, 2008

9th Circuit decision good news for employers

On August 7, 2008, the 9th Circuit Court of Appeals joined many other Circuits in deciding that in cases involving constitutional discriminatory hiring claims, the limitations period begins to accrue when candidates find out they aren't hired, or when a reasonable person would have realized this. The case is Zolotarev v. San Francisco.

Okay, so let's back up a second...what's a discrimination claim under the constitution? What we're talking about here are claims filed under Title 42 (Chapter 21) of the U.S. Code, such as Sections 1981 and 1983. These cases are typically brought against private sector employers (although as this case makes obvious, not always), and are sometimes combined with claims under other, more common, statutes, such as Title VII.

Why would someone want to bring a claim under these Sections? Several reasons:

- Unlike Title VII, the ADA, or the ADEA, there are no administrative requirements--in other words, someone can file directly in court rather than going through, say, the EEOC

- Unlike discrimination cases brought under other laws, there are no caps to compensatory and punitive damages (of course no punitive damages are available from public sector entities)

- Also unlike cases brought under other statutes, there can be individual liability in these cases--specific hiring supervisors and HR staff can be held liable (of course this is pretty rare and most folks are indemnified, but still, having your name in a lawsuit isn't much fun)

So what's accrual? The statute of limitations specifies how long plaintiffs have to file a suit. Accrual refers to when this period starts. In California, where this case was filed, the statute of limitations for these types of cases is one year (reiterated in this decision). When that year starts is the take-home from this case--according to the 9th Circuit, it starts when the plaintiffs found out they weren't hired, or when a reasonable person would have realized this. It does not start when they later suspect they were wronged.

This is in line with what many of the other Circuit courts have decided. So why is this good news for employers? Because it means these types of cases cannot be successfully brought more than one year after candidates are informed they weren't chosen. Not only does this mean you can breathe a sigh of relief, it limits how long you need to retain your records (although you will want to check what other laws apply to you and what their statutes of limitations are).

Saturday, August 16, 2008

Can the Dewey Color System be used for hiring?


Recent articles in ERE (and elsewhere) have pointed out that CareerBuilder has integrated the Dewey Color System into its site.

What is the Dewey Color System? As you might guess, it's a brief "test" where you choose your preference between various colors. It then provides you with a report that purports to describe your personality.

Pro? It's easy. It's much more digestible for most people than something like a traditional Big 5 test (and this is certainly not the first "user friendly" personality test to emerge).

Con? We're far from being able to recommend this as a selection or hiring tool.

Some problems I have:

1) the entire basis of research support (according to their website) is this single article.

2) correlations with the Strong Interest Inventory reported in this article aren't terrible, but aren't outstanding either (median of .68).

3) correlations with the 16PF, which actually is used for hiring, were worse--median correlation was .51 with a range of .33-.68.

4) the results in this article are based on a single sample in a single location--no generalizability here.

5) to their credit, the authors of the article point out "what we have not yet established is that the Dewey Color System Test also predicts the behaviors for which these personality tests are typically used. Thus, more extensive validation should consider using color preferences directly to predict variables such as job satisfaction, leadership potential, etc."

6) beware any testing instrument that is described as "valid" or "validated." Tests aren't validated. Interpretations of them are. Read the Principles, folks (if you must, skip to page 4).

Is it easy? Yup. Might there be something to this? Yup. Is this another example of the P.T. Barnum effect? Yup. Should we be very careful and conduct good research before using personality tests? Yup.

Are we at a point where we can say this should be used for personnel selection? Nope.

p.s. speaking of personality tests, did you know Hogan Assessment Systems has their own blog? I didn't until now--check it out.

Wednesday, August 13, 2008

Speed v. power

Part of my job is constantly trying to figure out how to communicate better with our customers (hiring supervisors). Discussions about validity and reliability may interest me, but they're a guaranteed recipe for blank stares from most people. So we think of other ways to talk about the pros and cons of tests.

This document is one attempt at communicating assessment research in layperson terms. It graphs power (validity) on the Y-axis and speed of administration on the X-axis. We could easily have chosen other criteria, such as adverse impact or applicant acceptance, but we felt when you get right down to it, these are the factors customers care about most.

So what do you see when you look at this graph? Do you think it communicates what we should be communicating? Have we over- or under-stated the case on any of the methods? Does this detract from basing the decision on job analysis?

Wednesday, August 06, 2008

Resumes? Applications? Or something in between?

A recent item in a newsletter published by ESR issued a recurring recommendation in HR: employers should use standard applications, not resumes. I'd like to take the opposite viewpoint. Well, not opposite, but, well...you'll see.

The newsletter contains many good reasons for requiring a standard application. For example, applicants often provide information you may not want (e.g., membership in advocacy organizations). Applicants also use the most positive spin possible, (over)emphasizing accomplishments and leaving employment gaps unexplained. In addition, applicants may not give you all of the information you require, such as dates and salaries.

These are good reasons for requiring a standard application over a resume. But let me play devil's advocate for a minute. Think about the modern candidate experience. In order to apply for a job, you oftentimes have to spend hours--days--searching through job boards and employer "career portals." If you're lucky enough to find a job that appears to be what you want (because of course employers' worst-kept secret is that they don't tell you the bad parts of a job), you have to complete a lengthy application (each time), or navigate your way through a "modern" applicant tracking system (read: GUI designed by IT).

Qualified candidates--who are hard to find in the first place--get fed up. They don't want to waste their time filling out applications or entering information into your ATS. They may just look for an opportunity that doesn't require them to describe their entire life experience. Hence the resume, which they already have on file and simply requires a quick update.

So how do we reconcile the needs of employers, who are doing their best to make sure they get the information they need, with the needs of candidates, who are trying to provide that information efficiently? I see several solutions:

1) The employer accepts resumes but makes very clear what the resume should contain. No unexplained employment gaps. Salary must be included. Etc.

2) Employers and candidates take advantage of a standardized third-party site that many folks already use for networking purposes (e.g., LinkedIn), again making clear what the profile must contain.

3) Employers use an ATS that takes less than 10 minutes for an applicant to apply.

Or how about a combination? How about giving the candidate options? The candidate must "register" with the employer's ATS, but all this takes is an email address. Then the candidate can either:

a) upload their resume (which must include all the information the employer needs)

or

b) route the employer to their on-line profile--which must exist on a prescribed set of sites (e.g., no MySpace pages).

These are just some (not particularly creative) ideas. I'm sure somebody out there has even better ones. But isn't it about time we figure out how to meet both candidate and employer needs when it comes to applying?