Saturday, September 24, 2011

Research update + happy anniversary...to me!

Two things this time: we've got a lot of research to go over, and then a bit of a celebration! First the research.

The September issue of the Journal of Applied Psychology is out. Let's see what it has to offer:

- Using performance ratings as an assessment or a criterion? You'll want to look at Ng et al.'s study of leniency and halo errors among superiors, peers, and subordinates of a sample of military officers.

- Speaking of criteria, you may be interested in Northcraft et al.'s study of how characteristics of the feedback environment influence resource use among competing tasks. Interesting stuff.

- Okay, let's turn to something more traditional. Berry et al. look at correlations between cognitive ability tests and performance among different ethnic groups. Not surprisingly for those of you familiar with the research, the largest difference found was between the White and Black samples.

- Another traditional (but always interesting) topic: designing Pareto-optimal selection systems when applicants belong to a mixture of populations. Check out De Corte et al.'s piece. Oh, and you might be interested in the in-press version.

- Dr. Lievens (a co-author on the previous study) has been busy. He and Fiona Patterson collaborate on a study of the incremental validity of simulations, both low-fidelity (SJTs in this case) and high-fidelity (assessment centers), beyond knowledge tests. Yes, both had incremental validity, and interestingly, ACs showed incremental validity beyond SJTs. Check out the in-press version as well.

- Wondering whether re-testing degrades criterion-related validity or impacts group differences? You're in luck, because Van Iddekinge et al. present the results of a study of just that. Short version? Re-testing actually did a lot of good.

- I know what you're thinking: "Might Lancaster's mid-P correction to Fisher's exact test improve adverse impact analysis?" Check out Biddle & Morris' study for an answer (and see the rough sketch of the mid-P idea right after this list).

- And now that you've had your fill of that statistical analysis, you find your mind wandering to effect size indices for analyzing measurement equivalence. I'm right there with ya. So are Nye & Drasgow.
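Quick aside for the statistically curious: the "mid-P" idea is simply to count only half the probability of the observed table, rather than all of it as the standard exact test does, when computing the p-value. Here's a minimal Python sketch with made-up applicant counts (the function name and numbers are my own illustration, not Biddle & Morris' code):

    # Minimal sketch of a one-sided mid-P test on a 2x2 selection table
    # (pass/fail by group). Hypothetical counts; illustration only.
    from scipy.stats import hypergeom

    def one_sided_mid_p(minority_pass, minority_fail, majority_pass, majority_fail):
        """P(fewer minority passers than observed) + 0.5 * P(exactly the observed count)."""
        n_minority = minority_pass + minority_fail            # minority applicants
        total = n_minority + majority_pass + majority_fail    # all applicants
        passers = minority_pass + majority_pass               # total selected
        # Under the null of no group difference, the minority pass count is hypergeometric.
        dist = hypergeom(total, n_minority, passers)
        return dist.cdf(minority_pass - 1) + 0.5 * dist.pmf(minority_pass)

    # Example: 20 of 100 minority applicants pass vs. 35 of 100 majority applicants.
    print(one_sided_mid_p(20, 80, 35, 65))

Compared to the standard exact test, the mid-P version is a bit less conservative, which is the appeal.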


Let's turn now to the October issue of the Journal of Personality and Social Psychology because there are a few articles I think might interest you...

- First, Lara Kammrath with a fascinating study of how people's understanding of trait behaviors influences their anticipation of how others will react.

- Speaking of fascinating, George et al. present the results of a 50-year longitudinal study of personality traits predicting career behavior and success among women. It makes you realize again how much has changed since the 1960s!

- I tell ya, this issue is chock-full of goodness. Carlson et al. demonstrate that people can make valid distinctions between how they see themselves and how others see them--potentially informing the debate on personality inventories.

- Lastly, a piece by Specht et al. on how personality changes over the life span and why this might be. Fascinating implications for using personality inventories for selection.

Bonus article: remember how I mentioned using performance ratings above? Well, you might be interested in an article by Lynn & Sturman in the most recent Journal of Applied Social Psychology, where they found that restaurant customers sometimes rated the performance of same-race servers higher than that of different-race servers--but it depended on the criterion.



FINALLY, I'm proud to announce that this blog has officially been going strong for five years. My first post (now incredibly hard to read) was in September of 2006. Back then the only other similar blog was Jamie Madigan's great (but now sadly defunct) blog, Selection Matters. My first email subscriber (from DDI, if you're curious) signed on a month later. Now I have almost 150 email subscribers and at least a couple hundred more who follow the feed. Around 3,000 individuals visit the site each month from over a hundred countries/territories (U.S., India, and Canada are 1-2-3). It's a labor of love, and I thank you for reading!

Saturday, September 10, 2011

One last time: It's all subjective


While reading news about a court decision recently, I was struck again by how the U.S. court system continues to make a false and largely unhelpful distinction between "objective" and "subjective" assessment processes--and it's certainly not the only one to do so. Presumably the distinction is meant to highlight how some processes are based on judgment while others are free from it.

One last time: it's all subjective.

I challenge you to come up with a single aspect of the personnel assessment process that is not based on judgment. What's more, "degree of judgment" is not a useful metric for determining the validity of a selection process--for many reasons, including but not limited to whose judgment is being used.

Here is a sample of assessment components that are based on judgment:

- how to conceptualize the job(s) to be filled

- how to study the job(s)

- how to recruit for the job(s)

- which subject matter experts to use

- how to select the KSAs to measure

- how to measure those KSAs

- how items are created or products are selected

- how to set a pass point, if used

- how to administer the assessments

- how to score the assessments

- how to combine assessments

- how to make a final selection decision

- what type of feedback to give the candidates

And this is just the tip of the iceberg. The entire process, like all complex decisions, is made up of smaller decisions, each of which can shape the result (What software do I use to analyze the data? Is this one KSA or two?).

So what WOULD be helpful in describing how an assessment process is developed and administered? I can think of a few yardsticks:

1. Extent to which processes and decisions are based on evidence. To what extent is the assessment and selection process based on methods that have been shown scientifically to have significant utility?

2. Degree of structure. To what extent are the assessment methods pre-determined? How flexible are the processes during the assessment?

3. Multi-dimensionality. How many KSAs are being measured? Are they sufficient to predict performance?

4. Measurement method. How many assessment methods are being used? Do they make sense given the targeted KSAs?

5. Transparency. Is the entire process understandable and documented?

I'm not inventing the wheel here. I'm not even reinventing it. I'm pointing out how most cars have four of them. It's obvious stuff. But it's amazing to me that some continue to perpetuate artificial distinctions that fail to point out the truly important differences between sound selection and junk.