Sunday, July 26, 2009

July 2009 J.A.P.: SJTs and more


Situational judgment tests (SJTs) have a long history of successful use in employment testing. These (typically multiple-choice) items describe a job-related scenario and then ask the test-taker to endorse the most appropriate response. The question itself usually takes one of two forms:

1) What SHOULD be done in this situation? ("knowledge instruction")

2) What WOULD you do in this situation? ("behavioral tendency instruction")

What are the practical differences between the two? Previous meta-analytic research, specifically McDaniel et al.'s 2007 study, revealed that knowledge instruction items tend to be more highly correlated with cognitive ability, while behavioral tendency items show higher correlations with personality constructs. In terms of criterion-related validity, there appeared to be no significant difference between the two.

But that study had limitations, two of which are addressed in a study in the July 2009 issue of the Journal of Applied Psychology. Specifically, Lievens et al. addressed the inconsistency in stem content by holding the stems constant while varying only the response instruction, and they studied a large sample of actual applicants rather than the incumbents who dominated McDaniel et al.'s 2007 sample.

Results? Consistent with the 2007 study, knowledge instructions were again more highly correlated with cognitive ability, and there was no meaningful difference in criterion-related validity (the criterion being grades in interpersonally oriented courses in medical school). Contrary to some research in low-stakes settings, there was no mean score difference between the two response instructions.
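For readers who want a concrete sense of what "criterion-related validity" means operationally, it's typically just the correlation between test scores and a criterion measure. Here's a minimal sketch in Python; the scores below are entirely hypothetical, not data from Lievens et al.

```python
# Hypothetical illustration: criterion-related validity operationalized
# as the Pearson correlation between predictor (SJT) scores and a
# criterion (e.g., course grades). These numbers are made up.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sjt_scores = [12, 15, 9, 18, 14, 11, 16, 10]               # hypothetical SJT totals
course_grades = [3.1, 3.6, 2.8, 3.9, 3.3, 3.0, 3.5, 2.9]  # hypothetical GPAs

print(f"criterion-related validity (r) = {pearson_r(sjt_scores, course_grades):.2f}")
```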

Practical implications? The authors suggest knowledge instruction items may be superior due to their resistance to faking. My only concern is that these items, given their stronger correlation with cognitive ability, are likely to result in adverse impact in many applied settings. As with all assessment situations, the decision will involve a variety of factors, including the KSAs required on the job, the size and nature of the applicant pool, the legal environment, etc. But at least this type of research supports the conclusion that both response instructions seem to WORK. By the way, you can see an in-press version of this article here.

Other content in this journal? There's quite a bit, but here's a sample:

Content validity ≠ criterion-related validity

More evidence that selection procedures can impact unit as well as organizational performance

Self-ratings appear to be culturally bound

Tuesday, July 21, 2009

Ricci webcast on August 12

The Ricci v. DeStefano decision continues to generate a lot of interest. To help sort it all out, the Personnel Testing Council of Metropolitan Washington, D.C. (PTC-MW) will host Dr. James Outtz, renowned I/O psychologist and co-author of an amicus brief in the case, on August 12.

Not in D.C.? Not a problem. The luncheon presentation will be webcast at an extremely low price. Check out the website for details.

Coincidentally, a much less well-known individual (yours truly) will also be presenting on the Ricci decision at PTC-Northern California (PTC-NC) at their August 13th luncheon.

By the way, check out some great commentary about the decision by several SIOP members here. I find it fascinating that SIOP came out strongly against the validity of the exam, to which the majority of the Supreme Court responded, "yawn."

Sunday, July 12, 2009

HR, comic book style


You've always suspected that HR would make a great comic (graphic novel), right? Well, it turns out you're right.

Check out Super Human Resources

(first issue here)

(video preview is also here)

Tuesday, July 07, 2009

How can we improve executive selection?


In the wake of recent financial meltdowns, many of us would agree that much of the problem stemmed from poor decision making--presumably from the top down. We know a lot about how to select the right people, yet our best estimates peg leadership failures at around 50%. Are there ways we can use I/O expertise to improve this statistic?

This is the topic of the first focal article in the June 2009 issue of Industrial and Organizational Psychology, written by George Hollenbeck.

The author makes several excellent points, among them:

- The process of selecting executives differs significantly from how we select, say, entry-level hires. The decisions tend to be based on "character"--essentially personality aspects with a little morality tossed in--rather than on standardized testing of competencies.

- I/O psychologists are rarely brought into the executive selection process, in large part because they don't "get" how selection decisions at this level are made. We tend to have an assessment or behavioral bent, whereas these decisions are more often holistic and highly subjective.

The author argues that we need to change our mindset to match more closely that of executives--we need to focus on character rather than competencies. The authors of the subsequent commentaries agree that the focus on executive selection is timely, but some question the emphasis on character and others point out that predicting performance at this level is incredibly difficult given all of the environmental factors.

Yet after all this, I can't help but wonder (as do some of the commentary authors)...is it selection professionals who need to change their mindset, or should how we select executives look more like how we select entry-level hires? Maybe we'd all benefit from largely taking the judgment component out and relying more on standardized methods such as ability tests. But is that realistic? Are people at the top willing to admit that their judgment may be inferior to standardized tests?

How can we marry assessment expertise with the political and organizational realities inherent in executive selection? My bet is that the answer lies in establishing quality relationships with high-level decision makers. Become a trusted adviser, demonstrate the bottom-line value of sound assessment, and be flexible about applying our best practices. This is the kind of partnership that works with first-line supervisors; there's a good chance it will work all the way up the chain.

Wednesday, July 01, 2009

Ricci case: Full of sound and fury...


There's been a lot of hoopla over the last several days about the U.S. Supreme Court's decision in Ricci v. DeStefano. It's been described as a win for "reverse discrimination" claims, a rebuke of written tests, and judicial activism. The way I read it, the decision is completely unsurprising and will likely change absolutely nothing about employment testing.

For anyone who isn't familiar with the case, here's a very brief rundown: the City of New Haven, CT gave promotional tests for firefighter Lieutenant and Captain positions using written multiple-choice tests and interviews. When the City crunched the results, it turned out--not surprisingly--that there was statistical evidence of adverse impact against the Black candidates. The City decided not to use the resulting eligibility list, and the White and Hispanic candidates sued, claiming disparate treatment. The Supreme Court ruled in their favor.
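As a side note, the "statistical evidence of adverse impact" mentioned above is commonly screened for using the four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures. Here's a minimal sketch of that computation; the pass counts below are hypothetical, not the actual New Haven figures.

```python
# Minimal sketch of the four-fifths (80%) rule, a common first screen
# for adverse impact: flag it when one group's selection rate is less
# than 80% of the highest group's rate. Counts here are hypothetical.

def selection_rate(passed, total):
    """Proportion of candidates in a group who pass the exam."""
    return passed / total

def four_fifths_flag(group_rate, highest_rate):
    """True if the group's selection rate falls below 80% of the
    highest group's selection rate."""
    return (group_rate / highest_rate) < 0.8

white_rate = selection_rate(16, 25)  # hypothetical: 16 of 25 pass
black_rate = selection_rate(6, 20)   # hypothetical: 6 of 20 pass

ratio = black_rate / white_rate
print(f"impact ratio = {ratio:.2f}; "
      f"adverse impact flagged: {four_fifths_flag(black_rate, white_rate)}")
```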

It's a somewhat unusual case in terms of who's on which side, and there's a lot of good reading in the decision for anyone wanting to know more about test validation. But the decision itself is totally consistent with three main themes from previous decisions:

(1) There really isn't "reverse discrimination"--there's just discrimination based on a protected classification, such as race, color, or sex. Majority groups are protected just like minority groups.

(2) Employers do not have to go to irrational lengths to validate their selection methods. Although the tests had flaws, the Court reiterated that employers simply need to follow a logical process for developing the exam to show job relatedness; the exams don't have to win any awards.

(3) Disparate treatment by a government entity in order to avoid liability for adverse impact is legal only in certain very specific instances (when there is a "strong basis in evidence"). The court has been trending for years toward "color-blind" selection decisions.

About the only thing this case really points out is that employers need to be ready to use the results from whatever test they administer, barring some enormous irregularities. That, and part of a defense against an adverse impact claim might be that choosing not to use the exam would have been evidence of disparate treatment (I'll grant you that one's a little confusing).

All in all--and I'm certainly not the only one who feels this way--it doesn't appear to be anything to get excited about.

Want to know more? Check out the SCOTUSwiki page.