The September issue of the International Journal of Selection and Assessment (IJSA) is out with a boatload of content. Let's check out some of the highlights:
First up, a piece by Gentry et al. with implications for self-rating instruments. The authors studied self- and observer ratings among managers in Southern Asia and Confucian Asia and found an important difference: the discrepancy between the two sets of ratings was greater in Southern Asia. Specifically, the gap appeared in the self-ratings rather than the observer ratings, indicating differences in how managers in the two regions perceived themselves. Implication? Differences in self-ratings may reflect cultural differences in addition to things like personality and instrument type.
The second article is a fascinating one by Saul Fine, who analyzed differences in integrity test scores across 27 countries. Fine found two important things: first, there are significant differences in test scores across countries; second, test scores were significantly correlated (r = -.48) with country-level measures of corruption as well as with several of Hofstede's cultural dimensions.
Next, an article by De Corte et al. that describes a method for creating Pareto-optimal selection systems that balance validity, adverse impact, and predictor constraints. This article continues the quest to balance utility and subgroup differences. A link to the article is here, but it wasn't functional at the time I wrote this; hopefully it will be soon.
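To make the idea concrete, here's a minimal sketch of the Pareto trade-off; this is not De Corte et al.'s actual algorithm, and every input (validities, intercorrelation, subgroup differences, selection ratio) is a made-up number for illustration. It sweeps weights across two predictors and keeps the weightings where you can't improve composite validity without worsening the adverse-impact ratio, or vice versa.

```python
# Hypothetical illustration of the Pareto-optimal idea (not De Corte et al.'s
# actual method): sweep weights on two predictors, score each composite on
# criterion validity and adverse-impact ratio, and keep the nondominated set.
import numpy as np
from scipy.stats import norm

r_xy = np.array([0.50, 0.25])   # assumed predictor-criterion validities
r_12 = 0.30                     # assumed predictor intercorrelation
d = np.array([1.0, 0.2])        # assumed standardized subgroup differences
sr = 0.30                       # assumed overall selection ratio

R = np.array([[1.0, r_12], [r_12, 1.0]])
cut = norm.ppf(1.0 - sr)        # cutoff in majority-group SD units
results = []
for w1 in np.linspace(0.0, 1.0, 101):
    w = np.array([w1, 1.0 - w1])
    sd_c = np.sqrt(w @ R @ w)            # SD of the weighted composite
    validity = (w @ r_xy) / sd_c         # composite criterion validity
    d_c = (w @ d) / sd_c                 # composite subgroup difference
    ai_ratio = norm.sf(cut + d_c) / norm.sf(cut)  # minority/majority pass rates
    results.append((w1, validity, ai_ratio))

# Keep weightings not dominated on both validity and AI ratio.
pareto = [p for p in results
          if not any(q[1] > p[1] and q[2] > p[2] for q in results)]
for w1, val, ai in pareto[::10]:
    print(f"w1={w1:.2f}  validity={val:.3f}  AI ratio={ai:.3f}")
```

The printed frontier shows the basic tension: leaning on the more valid (but higher-d) predictor buys validity at the cost of adverse impact, and the Pareto set is the menu of defensible compromises.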
Next, in an article that SHRM will probably place on their homepage if they haven't already, Lester et al. studied alumni from three U.S. universities to analyze the relationship between attainment of the Professional in Human Resources (PHR) certification offered by HRCI and early career success. Results? Those with a PHR were significantly more likely to obtain a job in HR (versus another field), BUT possession was not associated with starting salary or early-career promotions. I'll let you decide if you think it's worth the time (and expense).
If you need another reason to focus on work samples and structured interviews, here ya go. Anderson et al. provide the results of a meta-analysis of applicant reactions to selection instruments. Drawing on data from 17 countries, the authors found results similar to what we've seen in the past: work samples and interviews were the most preferred, while honesty testing, personal contacts, and graphology were the least preferred. In the middle (still favorably evaluated) were resumes, cognitive tests, references, biodata, and personality inventories.
Fans of biodata and personality testing may find the article by Sisco & Reilly reassuring. Using results from over 700 participants, the authors found that the factor structures of a personality inventory and biodata measure were not significantly impacted by social desirability at the item level. Implication? The measures seemed to hold together and retain at least an aspect of their construct validity even in the face of items that beg inflation.
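For the curious, here's a rough sketch of what that kind of check can look like, on simulated data and not necessarily Sisco & Reilly's exact procedure: partial a social-desirability score out of the items, refit the factor model, and see whether the loadings survive, using Tucker's congruence coefficient as the yardstick (values near 1.0 mean the structure held up).

```python
# Rough sketch of a factor-structure robustness check (simulated data; not
# Sisco & Reilly's exact procedure): residualize items on a social-
# desirability (SD) score, refit, and compare loadings via congruence.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, n_items = 700, 12
factors = rng.normal(size=(n, 2))                  # two true latent traits
loadings = np.vstack([np.repeat([[0.7, 0.0]], 6, axis=0),
                      np.repeat([[0.0, 0.7]], 6, axis=0)])
sd_score = rng.normal(size=n)                      # social-desirability score
sd_score -= sd_score.mean()
items = (factors @ loadings.T                      # trait signal
         + 0.3 * sd_score[:, None]                 # SD contaminates every item
         + rng.normal(size=(n, n_items)))          # noise

def fit_loadings(X):
    return FactorAnalysis(n_components=2, random_state=0).fit(X).components_.T

# Residualize each item on the social-desirability score.
beta = (items.T @ sd_score) / (sd_score @ sd_score)
items_resid = items - np.outer(sd_score, beta)

L_raw, L_resid = fit_loadings(items), fit_loadings(items_resid)

def congruence(a, b):
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Best-match absolute congruence handles factor order and sign flips.
for k in range(2):
    c = max(abs(congruence(L_raw[:, k], L_resid[:, j])) for j in range(2))
    print(f"factor {k}: best-match congruence = {c:.3f}")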
Speaking of personality tests, Whetzel et al. investigated the linearity of the relationship between OPQ scores and job performance. Results? Very little departure from linearity, and the few departures that did appear were small. This suggests that utility gains can be obtained across the full range of personality test scores.
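A common way to probe linearity, and I'm assuming something in this spirit rather than reproducing Whetzel et al.'s exact analysis, is polynomial regression: fit the linear model, add a squared term, and see whether it buys you anything. The data below are simulated, not the OPQ.

```python
# Toy linearity check (simulated data, not the OPQ): regress performance on
# a trait score, then add a squared term and test whether it adds anything.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
score = rng.normal(size=500)                  # standardized trait score
perf = 0.30 * score + rng.normal(size=500)    # assumed linear truth

X_lin = sm.add_constant(score)
X_quad = sm.add_constant(np.column_stack([score, score**2]))

fit_lin = sm.OLS(perf, X_lin).fit()
fit_quad = sm.OLS(perf, X_quad).fit()

print(f"linear R^2:    {fit_lin.rsquared:.4f}")
print(f"quadratic R^2: {fit_quad.rsquared:.4f}")
print(f"p-value for squared term: {fit_quad.pvalues[2]:.3f}")
```

If the squared term is negligible, as it was in the study, top-down selection on the trait keeps paying off as scores rise.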
Are you overloading your assessment center raters? Melchers et al. present the results of a study that strongly suggests that if you are using group discussions as an assessment tool, you need to be sensitive to the number of participants that raters are simultaneously observing.
There are other articles in here you may be interested in, including ones on organizational attractiveness, range shrinkage in cognitive ability test scores, and the link between staffing services and innovation.