Friday, September 24, 2010

September mega research update


I'm behind in posting updates on the September journals. Really behind. So instead of a series describing the detailed contents of each issue, I'm going to give you links to what's come out in the last month or so and let you explore. But I'll try to hit some high points:

Journal of Applied Psychology, v95(5)

Highlights include this piece from Ramesh and Gelfand, who studied call center employees in the U.S. and India and found that while person-job fit predicted turnover in the U.S., person-organization fit predicted it in India.


Human Performance, v23(4)

This issue has some good stuff, including Converse et al.'s piece on different forced-choice formats for reducing faking on personality measures, Perry et al.'s piece on better predicting task performance using the achievement facet of conscientiousness, and Zimmerman et al.'s article on observer ratings of performance.


Journal of Business and Psychology, v25(3)

Another issue filled with lots of good stuff, but I'm almost 100% positive the abstract links are session-based, so use the title links for the articles on response rates in organizational research (full text here), the importance of using multiple recruiting activities, and the importance of communicating benefit information in job ads.


Journal of Applied Social Psychology, v40(9)

The article to check out here is by Proost et al. and deals with different self-promotion techniques during an interview and their effect on interviewer judgments.


Journal of Occupational and Organizational Psychology, v83(3)

Check this issue out for articles on organizational attraction, communication apprehension in assessment centers, and the impact of interviewer affectivity on ratings.

Saturday, September 18, 2010

Every once in a while, an idea comes along...


Once in a while a research article comes along that revolutionizes or galvanizes the field of personnel assessment. Barrick & Mount's 1991 meta-analysis of personality testing. Schmidt & Hunter's 1998 meta-analysis of selection methods. Sometimes a publication is immediately recognized for its importance. Sometimes the impact of the study or article isn't recognized until years after its publication.

The September 2010 issue of Industrial and Organizational Psychology contains an article that I believe could have a resounding impact for years to come. Will it? Only time will tell.

The article in question, by Johnson et al., is on its face a summary of the concept of synthetic validation and a case for its wider use. As a refresher, synthetic validation is the process of inferring or estimating validity from the relationships between components of a job and tests of the KSAs needed to perform those components. It differs from traditional criterion-related validation in that the statistics are not based on a local study of the relationship between test scores and job performance. Studies have shown that synthetic validity estimates closely correspond to those from local validation studies as well as meta-analytic validity generalization (VG) estimates. Hence it has the potential to be as useful as criterion-related validation for generating estimates of, say, cost savings, without requiring the organization to gather sometimes elusive local data.
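To make the idea concrete, here's a minimal toy sketch in Python (my own illustration under simplified assumptions, not Johnson et al.'s actual procedure): take the importance weights from a job analysis and combine them with previously established test-component validities to get an estimated validity for each test.

```python
# Toy synthetic validity estimate: an importance-weighted sum of known
# test-component validities. All numbers are hypothetical, and a real
# application would also correct for predictor intercorrelations,
# unreliability, and range restriction.

# Job analysis: importance weights for the target job's components (sum to 1).
component_weights = {
    "data_entry": 0.5,
    "customer_service": 0.3,
    "problem_solving": 0.2,
}

# Transported validities: correlation of each test with each job component.
test_component_validity = {
    "perceptual_speed": {"data_entry": 0.40, "customer_service": 0.10,
                         "problem_solving": 0.15},
    "conscientiousness": {"data_entry": 0.20, "customer_service": 0.25,
                          "problem_solving": 0.18},
}

def synthetic_validity(test, weights, validities):
    """Importance-weighted sum of component validities for one test."""
    return sum(w * validities[test][comp] for comp, w in weights.items())

for test in test_component_validity:
    est = synthetic_validity(test, component_weights, test_component_validity)
    print(f"{test}: estimated validity = {est:.2f}")
```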

But if I'm right, the impact of the article will come not from its summary of the concept but from what it proposes: a giant database containing performance ratings, scores from selection tests, and job analysis information. Such a database has the potential to radically change how tests are developed and administered. I'll let the authors explain:

"Once the synthetic validity system is fully operational, new selection systems will be significantly easier to create than with a traditional validation approach. It would take approximately 1-2 hours in total; employers or trained job analysts just need to describe the target job using the job analysis questionnaire. After this point, the synthetic validity algorithms take over and automatically generate a ready-made full selection system, more accurately than can be achieved with most traditional criterion-related validation studies."

Sound like a mission to Mars? Maybe. But the authors are remarkably optimistic about the chances for such a system, and it appears to be in the early stages of development already. The commentaries following the focal article are generally very positive about the idea, with some authors even committing resources to the project. The authors respond by suggesting that SIOP initiate the database and link it to O*NET. They point out, correctly, that the project could radically improve the macro-level efficiency of matching people to jobs; imagine how much more productive a society would be if people with the right skills were systematically matched with the jobs requiring those skills.
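And if you're wondering what that "algorithms take over" step might look like, here's a grossly simplified sketch (entirely hypothetical; the function, data, and ranking logic are mine, not the authors'): score each test in the shared database against the job analysis profile and return the top scorers as a candidate battery.

```python
# Cartoon version of the proposed pipeline: job analysis ratings in,
# candidate selection battery out. Names, data, and logic are hypothetical.

def build_selection_system(job_profile, validity_db, battery_size=2):
    """Rank available tests by a simple synthetic validity estimate
    for this job and return the top ones as a candidate battery."""
    scored = []
    for test, component_validities in validity_db.items():
        est = sum(job_profile.get(comp, 0.0) * r
                  for comp, r in component_validities.items())
        scored.append((est, test))
    scored.sort(reverse=True)
    return [(test, round(est, 2)) for est, test in scored[:battery_size]]

# A trained analyst's importance ratings for the target job:
profile = {"data_entry": 0.5, "customer_service": 0.3, "problem_solving": 0.2}

# The shared database: each test's validity for each job component.
validity_db = {
    "perceptual_speed":  {"data_entry": 0.40, "customer_service": 0.10},
    "conscientiousness": {"data_entry": 0.20, "customer_service": 0.25},
    "gma":               {"data_entry": 0.30, "problem_solving": 0.45},
}

print(build_selection_system(profile, validity_db))
```

A real system would also have to handle predictor redundancy, adverse impact, and testing cost, so treat this as nothing more than a cartoon of the idea.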

So as you can probably tell, I think this is pretty exciting, and I'm looking forward to seeing where it goes.

I should mention there is another focal article, with commentaries, in this issue of IOP, but in my humble opinion it's not nearly as significant. Ryan and Ford provide an interesting discussion of the ongoing identity crisis in I/O psychology, demonstrated most recently by the practically tied vote over changing SIOP's name. I found two things of particular interest: first, they come out of the gate using the term "organizational psychology," a choice that gets only a footnote of explanation (a point raised by several commentary authors). Second, they take an interesting approach in presenting several possible futures for the field, ranging from a strengthening of its historic identity to "identicide."

Finally, I want to make sure everyone knows about the published results of a major task force on adverse impact. It, too, has the potential to significantly shape the study and legal treatment of this sticky (and persistent) issue.

Friday, September 10, 2010

Personnel Psychology, August 2010

The August 2010 issue of Personnel Psychology came out a while ago, so I'm overdue in taking a look at some of the content:

Greguras and Diefendorff write about their study of how "proactive personality" predicts work and life outcomes. Using data from 165 employees and their supervisors across three time periods, the authors found that proactive individuals were more likely to set and attain goals, which in turn predicted psychological need satisfaction; need satisfaction then predicted job performance, organizational citizenship behaviors (OCBs), and life satisfaction.

Speaking of personality, next is an interesting study by Ferris et al. that attempts to clarify the relationship between self-esteem and job performance. Using multisource ratings across two samples of working adults, the authors found that the importance participants placed on work performance for their self-esteem moderated the relationship. In other words, self-esteem predicted job performance mainly for people whose self-worth was contingent on performing well at work; when self-worth was anchored elsewhere, the link weakened. Interesting.

Lang et al. describe the results of a relative importance analysis comparing general mental ability (GMA) with seven narrower cognitive abilities (Thurstone's primary mental abilities). Using meta-analytic data, the authors found that while GMA accounted for between 10% and 28% of the variance in job performance, it was not consistently the strongest predictor. Add this study to a number of previous ones suggesting that one partial solution to the validity-adverse impact dilemma may be to use narrower cognitive abilities (e.g., verbal comprehension, reasoning).
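For readers unfamiliar with relative importance analysis, here's a compact sketch of one common approach, Johnson's (2000) relative weights, which partitions R-squared among correlated predictors via an orthogonal transformation. I can't vouch that this is the exact variant Lang et al. used, and the numbers below are made up, so treat it as illustrative only.

```python
import numpy as np

def relative_weights(Rxx, rxy):
    """Johnson's (2000) relative weights. Rxx is the predictor correlation
    matrix; rxy holds predictor-criterion correlations. Each returned value
    is one predictor's share of R-squared."""
    evals, evecs = np.linalg.eigh(Rxx)
    # Correlations between the predictors and their orthogonal counterparts:
    lam = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    beta = np.linalg.solve(lam, rxy)   # criterion regressed on orthogonal vars
    return (lam ** 2) @ (beta ** 2)

# Hypothetical correlations: GMA plus two narrower abilities vs. performance.
Rxx = np.array([[1.0, 0.6, 0.5],
                [0.6, 1.0, 0.4],
                [0.5, 0.4, 1.0]])
rxy = np.array([0.50, 0.35, 0.30])

eps = relative_weights(Rxx, rxy)
print(dict(zip(["GMA", "verbal", "reasoning"], eps.round(3))))
print("R-squared:", eps.sum().round(3))
```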

Last but definitely not least, Johnson and Carter write about a large study of synthetic validity (a topic Johnson writes more about in the September issue of IOP, discussed above). For those who need a reminder, synthetic validity is the process of inferring validity rather than directly analyzing predictor-criterion relationships. Analyzing a fairly large sample, the authors found that synthetic validity coefficients were very close to traditional validity coefficients--in fact, within the bounds of sampling error for all eleven job families studied. Validity coefficients were highest when both predictors and criterion measures were weighted appropriately (more on that below).

So what the heck does that mean? Essentially, it provides support for employers (or researchers) who lack the resources to conduct a full-blown criterion-related validation study but want either (a) a defensible way to build selection processes that do a good job of predicting performance, or (b) validity evidence to support tests already in use. Good stuff.
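One footnote on the "weighted appropriately" finding: when several predictors and several criterion measures are combined into composites, the resulting validity coefficient depends directly on the weights. Here's the standard composite correlation formula in code (my illustration with made-up numbers, not the paper's data); note how shifting weight toward the stronger predictor raises the composite validity.

```python
import numpy as np

def composite_validity(wx, wy, Rxx, Ryy, Rxy):
    """Correlation between a weighted predictor composite and a weighted
    criterion composite (standard composite-correlation formula)."""
    num = wx @ Rxy @ wy
    den = np.sqrt((wx @ Rxx @ wx) * (wy @ Ryy @ wy))
    return num / den

Rxx = np.array([[1.0, 0.3], [0.3, 1.0]])   # two predictors
Ryy = np.array([[1.0, 0.5], [0.5, 1.0]])   # two criterion measures
Rxy = np.array([[0.50, 0.45],              # predictor 1 is the stronger one
                [0.10, 0.10]])

equal = np.array([0.5, 0.5])
tuned = np.array([0.8, 0.2])               # favor the stronger predictor

print(composite_validity(equal, equal, Rxx, Ryy, Rxy).round(3))  # ~0.41
print(composite_validity(tuned, equal, Rxx, Ryy, Rxy).round(3))  # ~0.52
```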

Monday, September 06, 2010

Catbert tackles HR initiatives

In honor of Labor Day in the U.S., let's take a humor break from research and high-tech developments. In case you're not a regular Dilbert reader, Catbert (Evil Director of Human Resources) has recently gotten involved in three popular HR initiatives, with varying levels of success:

Workforce skill assessment ("strengths" fans take note)

Internal promotions

Employee surveys