Tuesday, December 28, 2010

Relax, researchers


Researchers make a lot of choices: What to study. How to measure. What statistic(s) to use. Researchers conducting meta-analyses (essentially a study of studies) face similar choices. So it's nice to know that any agonizing that meta-analysts go through may be largely unimportant in terms of the results.

In the January 2011 issue of the Journal of Management, Aguinis et al. report the results of their study of 196 meta-analyses published in places like Journal of Applied Psychology, Academy of Management Journal, and Personnel Psychology. Specifically, they looked at the impact that various methodological choices had on the overall effect sizes. Things like:

- Should we eliminate studies from the database?
- Should we include a list of the primary-level studies used in the meta-analysis?
- What meta-analytic procedure should we use?

Here, in their words, were the results:

"Our results, based on 5,581 effect sizes reported in 196 published meta-analyses, show that the 21 methodological choices and judgment calls we investigated do not explain a substantial amount of variance in the meta-analytically derived effect sizes. In fact, the mean effect size...for the impact of methodological choices and judgment calls on the magnitude of the meta-analytically derived effect sizes is only .007. The median [effect size] value is an even smaller value of .004."

So not only does this suggest researchers should spend less time worrying about methodological choices, it also raises a question about the value of including so much of this history in published articles--if it doesn't make any difference, do I really need to read about why you chose to eliminate studies due to independence issues or missing information?

The article has another, perhaps even more interesting, finding. And it's something that we personnel research nerds like to argue over: corrections. Things like correcting for range restriction (e.g., because you only hire people who get the top scores) and criterion unreliability (e.g., because supervisor performance ratings are less than perfect). For every "corrections were necessary to understand the conceptual-level relationships" you'll get a "that's great, but in the real world the data's the data."
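
For readers who haven't seen these corrections written out, here's a rough sketch of the textbook formulas (my own illustration; not reproduced from the article). Correcting for criterion unreliability divides the observed correlation by the square root of the criterion's reliability, and the classic correction for direct range restriction rescales the correlation by the ratio of the unrestricted to the restricted predictor standard deviation:

$$r_{c} = \frac{r_{xy}}{\sqrt{r_{yy}}}, \qquad r_{c} = \frac{u\, r_{xy}}{\sqrt{1 - r_{xy}^{2} + u^{2} r_{xy}^{2}}}, \quad u = \frac{S_{x}}{s_{x}}$$

where $r_{yy}$ is the criterion reliability and $S_{x}/s_{x}$ is the ratio of the applicant-pool (unrestricted) standard deviation to the incumbent (restricted) standard deviation on the predictor.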

So that's why it's fairly amusing to note the differences these types of corrections tended to make. Again, in the authors' own words:

"across 138 meta-analyses reporting approximately 3,000 effect sizes, the use of corrections improves the resulting meta-analytically derived effect size by about .05 correlation coefficient units (i.e., from about .21 to about .25). Stated differently, implementing corrections for statistical and methodological artifacts improves our knowledge of variance in outcomes by about only 25%."

So are we all going to stop arguing and hold hands? Not likely. For a variety of reasons, including the fact that opinions are hard to change--and social science researchers are not immune. You could also argue that in some cases an increase from .21 to .25 has significant implications--and to be sure that's true. But I agree with the authors that the number of cases where this greatly increases the practical usefulness of a theory is small.
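
To put that .21-to-.25 difference in more familiar terms, here's a quick back-of-the-envelope conversion (mine, not the authors'): squaring each correlation gives the proportion of variance in the criterion it accounts for.

$$.21^{2} \approx .044, \qquad .25^{2} \approx .063$$

In other words, the corrections move the variance accounted for from roughly 4% to roughly 6%, which helps explain why the practical payoff so often feels modest.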

Does this mean we should forget about refining our results to make them more accurate? Absolutely not. It just means that our current approaches may not warrant a lot of hand-wringing. It also means we should focus on high-quality primary studies to avoid the garbage-in, garbage-out problem.

So let's take a deep breath, congratulate ourselves for a productive year, and look forward to 2011, which I'm sure will bring with it all kinds of exciting and provoking developments. Thanks for reading.

By the way, you can access the article (at least right now) here.

Sunday, December 12, 2010

Haste makes waste


"Here you go, way too fast
don't slow down you're gonna crash
you should watch -- watch your step
don't look out you're gonna break your neck"
- The Primitives

There are many pieces of advice I could give the average employer when it comes to recruiting and hiring the right way. Make your job ads more attractive and concise. Use realistic job preview technology. Conduct thorough job analyses. Reduce your over-reliance on interviews.

But I'd be hard pressed to come up with a more important piece of advice than this: SLOW DOWN.

Too often organizations rush through the recruitment and selection process, relying on past practice and not giving it the attention it deserves. The result is often poor applicant pools and disappointing final selection choices.

Here are some classic warning signs things are going the wrong way:

"We don't have time to re-do the advertisement"
"Let's just do interviews like we did last time"
"The questions we used last time will be fine"

When you hear these types of comments, your reaction should be: "Because they worked so well last time?" Okay, maybe that's what you think. What you say is: "What information do we have about their prior success?" Hiring decisions are too important to be left to hunches and cognitive laziness--we all know this. Yet it's surprising how often folks fail to put in the effort they should.

Why do people fall into this trap? Mostly because it's easier that way (although naivete and lack of organizational processes play a role). Decision makers naturally gravitate toward the path of least resistance (we do love our heuristics), and it takes resilience to put in the effort each time. But it's not just because humans are lazy. It's because we're busy, and because other factors tend to overshadow sound selection--like organizational politics or feeling that someone is "owed" the job.

Decision making as a field of study tends to be overlooked when it comes to hiring, and that's a shame (ever heard of escalation of commitment?). Fortunately there is a large body of research we can learn from, and this cross-fertilization is the subject of the first focal article, by Dalal et al., in the December 2010 issue of Industrial and Organizational Psychology. They point out that the field of I/O psychology is hurting itself by not taking advantage of the theories, methods, and findings from the field of judgment and decision-making.

One of their main recommendations is to make "a concerted effort to consider the benefits of adopting courses of action other than the favored one." This could mean things like making devil's advocates and group decision making an automatic part of your hiring plan. It could even be as simple as a checklist that hiring supervisors fill out to ensure they're not rushing it. Or--heaven forbid--we could hold supervisors accountable for the quality of their hiring decisions.

There are several interesting commentaries following the article, making many different points. One of my favorites is from Kristine Kuhn, who I'll now quote from liberally:

"...evidence-based recommendations to use statistical models to select employees rather than more holistic, subjective assessments meet substantial resistance. The ambiguous criterion of "fit" is advanced by many experienced practitioners as a reason for not relying solely on validated predictive indices."

"Despite considerable evidence that typical interviews do not add predictive validity, managers often resist attempts to impose even minimal structure." (Consider this the next time you follow-up a structured interview with a more casual unstructured one)

"Some managers may be receptive to training and even willing to implement structural changes in selection procedures. But this will only be the case if the primary goal is in fact to hire the people most to likely to perform well and not those with whom they will be most comfortable interacting."

Amen.

As for me, I'm going to make a New Year's resolution to take more deep breaths and slow down. Most involved in hiring would do well to do the same.

I should point out there is another focal article in this issue, by Drasgow et al., that you methodology folks will like. In it, the authors argue that rating scale methods derived from Likert's approach (the 5-point response scale) are inferior to ones that have evolved from the (older) ideas of Thurstone.

In a nutshell, the authors describe how the latter approach focuses on an "ideal point" that describes an individual's standing on a particular trait. It involves, as part of the rating scale design, asking people to provide ratings that might seem unnaturally forced or incongruous (do you like waffles or Toyotas?). But the authors argue strongly that this approach offers tangible improvements for things like personality inventories.
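
To make the contrast a bit more concrete, here's a minimal sketch (my own illustration, not the authors' model; the values, function names, and the simple Gaussian "unfolding" curve are all assumptions on my part). In a dominance (Likert-tradition) model, the probability of endorsing an item keeps rising as the trait level increases; in an ideal point (Thurstone-tradition) model, endorsement peaks where the person's trait level matches the item's location and drops off on both sides.

```python
import numpy as np

def dominance_response(theta, location, discrimination=1.0):
    """Likert-tradition (dominance) model: endorsement probability rises
    monotonically as the trait level moves past the item location
    (a simple logistic curve)."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - location)))

def ideal_point_response(theta, location, spread=1.0):
    """Thurstone-tradition (ideal point) model: endorsement probability
    peaks when the trait level matches the item location and falls off
    in both directions (a simple Gaussian unfolding curve, not the full
    model the authors describe)."""
    return np.exp(-((theta - location) ** 2) / (2.0 * spread ** 2))

item_location = 0.5                    # hypothetical item location
for theta in np.linspace(-3, 3, 7):    # hypothetical standardized trait levels
    print(f"theta={theta:+.1f}  dominance={dominance_response(theta, item_location):.2f}"
          f"  ideal point={ideal_point_response(theta, item_location):.2f}")
```

Running it shows the dominance probabilities climbing steadily with the trait level, while the ideal point probabilities rise and then fall once the trait level overshoots the item's location--which is the basic intuition behind the "ideal point" label.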

The commentators are...shall we say...skeptical. But the back-and-forth makes for some interesting reading if this is your cup of tea.

Saturday, December 04, 2010

More on personality: Empathy and genetic links

The research on personality inventories continues unabated with two new studies.

The first, by Taylor et al. in the December 2010 JOOP, found that empathy plays an important role in explaining the relationship between Big 5 traits and organizational citizenship behaviors.

The second, by McCrae et al. in the December 2010 Journal of Personality and Social Psychology, was an attempt to better explain the genetic underpinnings of personality. The effects found in this study were small but significant, suggesting further research is needed to better understand this relationship.

Speaking of personality, don't miss Bob Hogan's most recent post to his blog; it features a wonderfully simple explanation of the value of even "small" correlations between assessment instruments and job performance.