Researchers make a lot of choices: What to study. How to measure. What statistic(s) to use. Researchers conducting meta-analyses (essentially a study of studies) face similar choices. So it's nice to know that any agonizing meta-analysts go through over these choices may have little effect on the results.
In the January 2011 issue of the Journal of Management, Aguinis et al. report the results of their study of 196 meta-analyses published in places like Journal of Applied Psychology, Academy of Management Journal, and Personnel Psychology. Specifically, they looked at the impact that various methodological choices had on the overall effect sizes. Things like:
- Should we eliminate studies from the database?
- Should we include a list of the primary-level studies used in the meta-analysis?
- What meta-analytic procedure should we use?
Here, in their words, were the results:
"Our results, based on 5,581 effect sizes reported in 196 published meta-analyses, show that the 21 methodological choices and judgment calls we investigated do not explain a substantial amount of variance in the meta-analytically derived effect sizes. In fact, the mean effect size...for the impact of methodological choices and judgment calls on the magnitude of the meta-analytically derived effect sizes is only .007. The median [effect size] value is an even smaller value of .004."
So not only does this suggest researchers should spend less time worrying about methodological choices, it raises a question about the value of including too much of this history in published articles--if it doesn't make any difference, do I really need to read about why you chose to eliminate studies due to independence issues or missing information?
The article has another, perhaps even more interesting, finding. And it's something that we personnel research nerds like to argue over: corrections. Things like correcting for range restriction (e.g., because you only hire people who get the top scores) and criterion unreliability (e.g., because supervisor performance ratings are less than perfect); the standard formulas are sketched below. For every "corrections were necessary to understand the conceptual-level relationships" you'll get a "that's great, but in the real world the data's the data."
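If you're not familiar with these corrections, here's a minimal sketch of the two textbook formulas: the attenuation correction for criterion unreliability and Thorndike's Case II correction for direct range restriction. The numbers are made up purely for illustration, and real meta-analytic procedures (e.g., Hunter & Schmidt) handle the order and interaction of these corrections more carefully than this does.

```python
import math

def correct_for_unreliability(r_obs, ryy):
    """Classic attenuation correction: divide the observed correlation
    by the square root of the criterion reliability."""
    return r_obs / math.sqrt(ryy)

def correct_for_range_restriction(r_obs, u):
    """Thorndike's Case II correction for direct range restriction,
    where u is the ratio of the unrestricted to restricted predictor SD."""
    return (r_obs * u) / math.sqrt(1 - r_obs**2 + (r_obs**2) * (u**2))

# Illustrative, made-up numbers: an observed validity of .21, a supervisor-
# rating reliability of .52, and an applicant/incumbent SD ratio of 1.3.
r = 0.21
r = correct_for_unreliability(r, ryy=0.52)   # ~.29
r = correct_for_range_restriction(r, u=1.3)  # ~.37
print(round(r, 2))
```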
That's why it's fairly amusing to note how much difference these corrections tended to make. Again, in the authors' own words:
"across 138 meta-analyses reporting approximately 3,000 effect sizes, the use of corrections improves the resulting meta-analytically derived effect size by about .05 correlation coefficient units (i.e., from about .21 to about .25). Stated differently, implementing corrections for statistical and methodological artifacts improves our knowledge of variance in outcomes by about only 25%."
So are we all going to stop arguing and hold hands? Not likely, for a variety of reasons--including the fact that opinions are hard to change, and social science researchers are not immune. You could also argue that in some cases an increase from .21 to .25 has significant implications--and to be sure that's true. But I agree with the authors that the number of cases where this greatly increases the practical usefulness of a theory is small.
Does this mean we should forget about refining our results to make them more accurate? Absolutely not. It just means that our current approaches may not warrant a lot of hand-wringing. It also means we should focus on high-quality primary studies to avoid the garbage-in, garbage-out problem.
So let's take a deep breath, congratulate ourselves for a productive year, and look forward to 2011, which I'm sure will bring with it all kinds of exciting and provoking developments. Thanks for reading.
By the way, you can access the article (at least right now) here.