Thursday, May 27, 2010

Lewis case emphasizes need for valid tests


On Monday, May 24, the U.S. Supreme Court ruled in Lewis v. City of Chicago that plaintiffs filing an adverse impact discrimination claim under Title VII of the Civil Rights Act have 300 days to file from each time the test results are used to fill a position--not just 300 days from when the test was administered.

This mainly affects employers that administer an exam and then use the results to fill positions over several years. In Lewis it was an entry-level firefighter exam, and my guess is that public sector agencies and large employers are the ones that should pay particular attention to the ruling.

Why? Well, for one, it means more potential adverse impact lawsuits. If you were counting on being safe 300 days after the exam, that is no longer the case. Second, it emphasizes the need to follow professional guidelines when developing an exam. Employers can successfully defend against an adverse impact claim by showing that the selection practice is "job related for the position in question and consistent with business necessity..." (and that no alternatives with similar validity and less adverse impact were available). This means your exams need to be developed and interpreted by people who know what they're doing.
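As a refresher on what plaintiffs in these cases are typically alleging, here's a minimal sketch of the four-fifths (80%) rule from the Uniform Guidelines, the usual first screen for adverse impact. The group labels and applicant counts below are hypothetical:

```python
def selection_rate(hired, applied):
    """Proportion of a group's applicants who were selected."""
    return hired / applied

# Hypothetical applicant-flow numbers for two groups
majority_rate = selection_rate(hired=80, applied=200)  # 0.40
minority_rate = selection_rate(hired=20, applied=100)  # 0.20

# Four-fifths rule: a selection-rate ratio below 0.80 is generally
# regarded as evidence of adverse impact under the Uniform Guidelines.
impact_ratio = minority_rate / majority_rate  # 0.50
if impact_ratio < 0.80:
    print(f"Impact ratio {impact_ratio:.2f}: potential adverse impact")
```

If the numbers look like this, the employer is in the position Lewis contemplates: defending the exam's job-relatedness, potentially years after it was given.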

The City claimed that evidence related to an employer's business necessity defense might be unavailable by the time the lawsuit is brought--the court was not swayed. This means you'll want to hang on to your exam development records for at least a year beyond the last time you use the results to fill a position.

Another point worth noting: this case boils down to the validity of a cut score used by the City--one the City itself admitted wasn't supportable. Proper exam development and interpretation includes setting a job-related pass point based on subject matter expert input and/or statistical evidence that it is linked to job performance.
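To make the "subject matter input" route concrete, here's a minimal sketch of a modified Angoff workshop, one common method for setting a job-related pass point (the ruling doesn't prescribe any particular method, and the ratings below are hypothetical):

```python
# Modified Angoff method: each subject matter expert (SME) estimates, for
# each item, the probability that a minimally competent candidate would
# answer it correctly. The recommended cut score is the sum of the item
# averages. These ratings are hypothetical (3 SMEs x 5 items).
sme_ratings = [
    [0.9, 0.7, 0.6, 0.8, 0.5],  # SME 1
    [0.8, 0.6, 0.7, 0.9, 0.4],  # SME 2
    [0.9, 0.8, 0.5, 0.8, 0.6],  # SME 3
]

n_smes = len(sme_ratings)
item_means = [sum(item) / n_smes for item in zip(*sme_ratings)]
cut_score = sum(item_means)  # expected raw score of a minimally competent candidate

print(f"Recommended raw cut score: {cut_score:.1f} out of {len(item_means)}")
```

In practice you'd also check the recommended pass point against whatever statistical evidence is available (e.g., criterion data) before adopting it, and document the whole process.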

The Lewis ruling doesn't fundamentally change what we should be doing. It just emphasizes that we need to do it right.

Click here for a quick overview of the facts of the case.

Sunday, May 23, 2010

June 2010 IJSA

The summer journal season continues with the June 2010 issue of the International Journal of Selection and Assessment. Take a deep breath; there's a lot packed into this issue:

- Roth et al. provide evidence that women outperformed men on work sample exams that involved social skills, writing skills, or a broad array of KSAs. To the extent that an employer is trying to avoid discriminating against female applicants, this provides support for work sample usage.

- In a study of managers in Taiwan, Tsai et al. show that the most effective way an applicant can make up for a slip in an interview is to apologize (vs. attempting to justify or use an excuse).

- Jackson et al. strive to add some clarity on task-based assessment centers.

- Blickle & Schnitzler provide evidence of the construct and criterion-related validity of the Political Skill Inventory.

- Colarelli et al. studied how racial prototypicality and affirmative action policies impact hiring decisions. Results of a resume review indicated more jobs were awarded to black candidates as racial prototypicality and affirmative action policy strength increased, but stronger AA policies decreased the percentage of minority hires attributed to higher qualifications.

- In my personal favorite article of the issue, Karl et al. found in a study of U.S. and German students that those low on conscientiousness (especially), agreeableness, and emotional stability were more likely to post "Facebook Faux Pas". This provides some support for employers who screen out applicants based on inappropriate social networking posts. I'll talk more about this in my upcoming webinar.

- Denis et al. provide support for the NEO PI-R's ability to predict job performance in two French-Canadian samples.

- Bilgiç and Acarlar report results of a study of Turkish students' perceptions of various selection instruments. Interviews were rated most highly, and privacy perceptions differed somewhat depending on the goal orientation of the student.

- Trying to figure out how to hire better direct support professionals (e.g., those providing long-term residential care or care to those with disabilities)? Robson et al. describe the development of a composite predictor composed of various measures (e.g., agreeableness, numerical ability) that predicted performance, satisfaction, and turnover.

- Ahmetoglu et al. provide support for using the Fundamental Interpersonal Relations Orientation-Behavior (FIRO-B) to predict leadership capability.

- Ispas et al. describe results of a study that showed support for a nonverbal cognitive ability measure (the GAMA) in predicting job performance in two samples.

- Last but not least, in another win for context-specific assessments, Pace & Brannick show how a measure of openness to experience tailored to the work context outpredicted the comparable general NEO PI-R scale. IMHO this is how personality measures will eventually become more prominent and accepted as pre-hire assessments.

Friday, May 21, 2010

IPAC conference to feature Campbell, McDaniel, Highhouse, and more

Those of you on the fence about attending the 2010 International Personnel Assessment Council (IPAC) conference on July 18-21 may be interested to know that a preliminary schedule has been released that reveals some great speakers and topics. For example:

- David Campbell's provocatively titled opening session, The Use of Picture Postcards for Exploring Diversity Issues Such as Bias and Prejudices, or "How Can We Keep Our Grandchildren From Going to War With Each Other?"

- Not to be outdone, Michael McDaniel kicks things off Tuesday morning with Abolish the Uniform Guidelines.

- Scott Highhouse closes things up Wednesday with A Critical Look at Holistic Assessment.

- Great pre-conference workshops on everything from job analysis to fairness.

- Wonderfully diverse concurrent sessions on topics such as public service motivation, leadership coaching, simulations, engagement, online testing, charging for exams, test transportability, cross-cultural personality assessment, measuring workforce gaps, adverse impact analysis, faking and lie detection, and succession planning. And that's just a sample!

Staying current on assessment through professional education is one of the commandments of our field. I hope you'll be joining your friends and colleagues in Newport Beach. Early bird registration ends June 1st.

Friday, May 14, 2010

Personnel Psychology, Summer 2010


The latest issue of Personnel Psychology (v63, #2) marks the beginning of summer journal season. Let's take a peek at some of what's inside:

Practice makes...better. John Hausknecht studied over 15,000 candidates who applied for supervisory positions (including 357 who repeated the process) over a 4-year period with a large organization in the service industry. The selection process included a personality test. He found that candidates who failed the first time around showed practice effects of .40 to .60 standard deviations on dimension-level scores. Candidates who passed the first time, but were taking the test again for other reasons, generally showed no difference in scores. More interestingly, on several subscales, low scores the first time around were associated with practice effects that exceeded one standard deviation. A good reminder that personality inventories are susceptible to "faking", but certainly not a nail in their coffin, as they still work quite well in many situations.
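Those practice effects are standardized mean differences (d): the score gain divided by the pooled standard deviation. A minimal sketch of the computation, using hypothetical first- and second-attempt scores rather than Hausknecht's data:

```python
import statistics

# Hypothetical dimension scores for six repeat test-takers --
# not data from the Hausknecht study.
first_attempt = [2.6, 3.4, 2.9, 3.8, 3.1, 2.4]
second_attempt = [3.0, 3.6, 3.1, 4.0, 3.3, 2.8]

def cohens_d(before, after):
    """Standardized mean difference using the pooled standard deviation."""
    nb, na = len(before), len(after)
    var_b, var_a = statistics.variance(before), statistics.variance(after)
    pooled_sd = (((nb - 1) * var_b + (na - 1) * var_a) / (nb + na - 2)) ** 0.5
    return (statistics.mean(after) - statistics.mean(before)) / pooled_sd

print(f"Practice effect: d = {cohens_d(first_attempt, second_attempt):.2f}")
# Prints d = 0.56, i.e., a gain of about half a standard deviation --
# the ballpark of the dimension-level effects reported in the study.
```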

Another reason to structure your interviews. As if you needed more convincing, McCarthy et al.'s study of nearly 20,000 applicants for a managerial-level position in a large organization found that the use of a structured interview resulted in zero main effects for applicant gender and race on interview performance. Similarly, there were no effects of applicant-interviewer similarity with respect to gender and race.

Users of the CRT-A take note. The conditional reasoning test of aggression (CRT-A) is used to detect individuals with a propensity for aggression. Previous studies have suggested the criterion-related validity of this test is around r = .44. In this study, Berry et al. meta-analyzed a large data set and found much lower values, in the .10-.16 range, rising to .24-.26 when certain studies were excluded.
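For readers who haven't run one, the core of a meta-analytic validity estimate like this is just a sample-size-weighted average of study correlations, with artifact corrections applied afterward. A minimal sketch using hypothetical studies--not Berry et al.'s data:

```python
# Hypothetical (sample size, observed validity r) pairs for four studies --
# not Berry et al.'s actual data.
studies = [(120, 0.08), (340, 0.14), (95, 0.22), (210, 0.11)]

# Bare-bones meta-analysis: weight each study's r by its sample size,
# so larger (more precise) studies count for more.
total_n = sum(n for n, _ in studies)
mean_r = sum(n * r for n, r in studies) / total_n

print(f"Sample-size-weighted mean validity: r = {mean_r:.2f}")  # r = 0.13
```

Which studies get included, and which corrections get applied, is exactly where estimates like .44 versus .10-.16 can diverge.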

Assess your way into a job. Last but not least, Wanberg et al. describe the development of an inventory for job seekers called Getting Ready for your Next Job (YNJ, available here). The authors present results tying inventory components (e.g., job search intensity, Internet use) to subsequent employment outcomes.

Stay tuned, new issues of JAP, IJSA, and others should be out soon!

Wednesday, May 05, 2010

Grade your co-workers.com?


It was only a matter of time.

We know people like judging other people. Heck, we even like watching other people judge people. We also know people like interacting with websites rather than simply reading them. The natural result? Websites where you can judge other people.

Collective judgments are nothing new when it comes to restaurants, books, or even employers. But we're entering a new era where your online reputation may in large part be determined by other people. Assuming this trend takes off.

On one end of this spectrum, we have websites like Checkster, which is more of a reference checking or 360-degree feedback tool. It's what I would consider a "closed" system in its current iteration, because unless you're part of the process (applicant, employer, or reference-giver) you don't interact with it. The information remains relatively private and it's for a specific situation.

In the middle are sites like LinkedIn, which allow you to "recommend" people. LinkedIn is a bit more open in that you can view someone's profile, but to see someone's recommendations you need to be connected to them in some way, which is generally tricky for an employer unless they're already very connected. The other problem is the title--recommendations. This precludes other types of, shall we say, more constructive feedback.

On the far end of the spectrum is Unvarnished, which has recently gotten a lot of press. It's the most open system in that people's profiles are readily available (presumably; it's still in beta). Any Unvarnished user can add a profile of a person to the site or comment on an existing one. And it's all anonymous, although reviews can be rated and moderated. Finally, you can "claim" your profile and receive notification of new reviews, comment on ratings, and request reviews from specific people.

One of the big questions about this model is how accurate the information is. Are people just using it as an opportunity to get back at someone? Do they really know the person? To some, these concerns are so overwhelming that they can't imagine using such a site. So it might be helpful to look at some recent research on a similar site, RateMyProfessors, which shares the open feel of Unvarnished.

You're probably familiar with RateMyProfessors. It's a simple way for students to provide feedback about their teachers. Students rate teachers on things like helpfulness and clarity and can leave comments as well. Those being rated can even post responses.

Sounds like a way for failing students to rant about their professors, right? Well you might be surprised. In a new study published online, the authors looked at several hundred students and professors at the University of Wisconsin-Eau Claire. Here are some of their results:

1) Ratings were more frequently positive than negative.

2) A teacher's "popularity" (or lack thereof) was not correlated with the frequency of feedback.

3) Students' primary motivation for posting was not to "rant" or "rave".

4) Those who posted were no different from those who hadn't in terms of GPA, year in school, or learning goal orientation. They were, however, more likely to be male, and their program of study was correlated with the likelihood of posting (e.g., those in the social sciences were more likely to post than those in the arts and humanities).

These results, if they generalize to similar sites like Unvarnished, suggest the ratings may be more accurate than we fear, and thus more useful. We know that peer reviews have at least moderate validity in terms of predicting performance. But there are still a lot of questions to be answered in terms of how the feedback is structured and how the information will be used by a potential employer.

So...might there be hope for crowdsourcing one's reputation? Or are we headed down a dangerous road? Would this make employers' lives easier--or just more confusing? Are defamation suits a possibility?

As an applicant yourself, here's something else to think about: would you rather your online reputation be determined by what an employer finds out about you while randomly surfing, or would you rather have a site where you can--at least partially--manage it?

Finally, consider this: If such a website became popular and filled with information about applicants...would you look someone up before hiring them?

Saturday, May 01, 2010

Webinar on Internet snooping


Okay, that's not the title. The real title is actually much lengthier but more accurate: "They posted what? Promises and pitfalls of using social networking and other Internet sites to research job candidates." Yours truly will be presenting this webinar--IPAC's first--on June 9th.

I believe Internet snooping is one of the elephants in the room when it comes to personnel selection--most people are doing it, but we don't talk about it. The way to deal with this is to get things out in the open and give hiring supervisors some informed guidance, rather than pretending they're not doing it or deluding ourselves into thinking that blocking these websites at work takes care of it.

I'll be mainly focusing on two points: (1) why websites like Facebook and LinkedIn hold so much promise when it comes to gathering additional data on candidates, and (2) what the drawbacks are if you're going to do this. The latter includes things like ensuring authenticity, uncovering information you wish you hadn't, and finding the information in the first place.

It's free for IPAC members and $75 for non-members, which includes an IPAC membership for the rest of the year. More details are here. Hope to "see" you there!