Monday, October 11, 2010

Q&A with Piers Steel: Part 1

A few weeks ago I wrote about a research article that I think proposes a revolutionary idea: the creation of a synthetic validity database that would generate ready-made selection systems rivaling or exceeding the results of a traditional criterion validation study.

I had the opportunity to connect with one of the article's authors, Piers Steel, Associate Professor of Human Resources and Organizational Dynamics at the University of Calgary. Piers is passionate about the proposal and believes strongly that the science of selection has reached a ceiling. I wanted to dig deeper and get some details, so I posed some follow-up questions to him. Read on for the first set of questions; I'll post the rest next time:

Q1) What is the typical state of today's selection system--what do we do well, and what don't we?

A1) Here is a quote from a well-respected selection journal, Personnel Psychology: “Psychological services are being offered for sale in all parts of the United States. Some of these are bona fide services by competent, well-trained people. Others are marketing nothing but glittering generalities having no practical value.... The old Roman saying runs, Caveat emptor--let the buyer beware. This holds for personnel testing devices, especially as regard to personality tests.”

Care to try to date it? It is from the article “The Gullibility of Personnel Managers,” published in 1958. Did you guess a different date? You might have, as the observation is as relevant today as it was then -- nothing fundamental has changed. Just compare it with this more recent 2005 excerpt from HR Magazine, “Personality Counts”: “Personality has a long, rich tradition in business assessment,” says David Pfenninger, CEO of the Performance Assessment Network Inc. “It’s safe, logical and time-honored. But there has been a proliferation of pseudo tests on the market: Caveat emptor.”

Selection is typically terrible, with good being the exception. The biggest reason is that top-notch selection systems are financially viable only for large companies with a high-volume position. Large companies can justify the $75,000 cost and the months needed to develop and validate a system, and perhaps, if they are lucky, have the in-house expertise to identify a good product. Most other employers don’t have the skill to differentiate the good from the bad, as both look the same when presented in nearly identical glossy brochures and slick websites. So the majority of hires are done with a regular unstructured job interview – it is the only thing employers have the time and resources to implement. Interviews alone are better than nothing, but not much better – candidates are typically better at deceiving the interviewer than the interviewer is at revealing the candidate.

The system we have right now can’t even be described as being broken. That implies it once worked or could be fixed. Though ideally we could do good selection, typically it is next to useless, right up there with graphology, which about a fifth of professional recruiters still use during their selection process. For example, Nick Corcodilos reviews how effective internet job sites are at getting people a position. He asks us to consider: “is it a fraud?”

Q2) What's keeping us from getting better?

A2) Well, there are a lot of things. First, sales and marketing work, even if the product doesn’t. When you have a technical product and a nontechnical employer or HR office, you have a lot of room for abuse. I keep hearing calls for more education and for management to care more. You are right, they should care more and know more. People should also care and know more about their retirement funds. Neither is going to change much.

Second, the unstructured job interview has a lot of “truthiness” to it. Every professional selection expert I know includes a job interview component in the process even when it doesn’t do much, as the employer simply won’t accept the results of the selection system without it. There are some cases where people “have the touch” and add value, but this is the exception. Still, everyone thinks they are gifted, discerning, and thorough. This is the classic competition between clinical and statistical prediction, with the evidence massively favoring the superiority of the latter over the former, but people still preferring the former over the latter (here are a few cites to show I’m not lying, as if you are like everyone else, you won’t believe me: Grove, 2005; Kuncel, Klieger, Connelly, & Ones, 2008).

Third, it just costs too much and takes too much time to do it right. Also, most jobs aren’t really large enough to do any criterion validation.

Q3) What might the future look like if we used the promise of synthetic validity?

A3) Well, to quote an article John Kammeyer-Mueller and I wrote, our selection systems would be "inexpensive, fast, high-quality, legally defensible, and easily administered.” Furthermore, every year they would noticeably improve, just like computers and cars. A person would have their profile taken and updated whenever they want, with initial assessments done online and more involved ones conducted in assessment centers. Once they have the profile, they would get a list of jobs they would likely be good at; ones they would likely be good at and enjoy; and ones they would likely be good at, enjoy, and that are in demand.

Furthermore, using the magic of person-organization fit, we could inform them of the type of organization they would like to work for. If someone submitted their profile to a job database, job positions would come to them automatically every day, each with the likelihood of them succeeding at it. These jobs would arrive in their morning email if they wanted. Organizations would also automatically receive appropriate job applicants, along with a ready-built selection system to confirm that the profile submitted by the applicant was accurate.

Essentially, we would efficiently match people to jobs and jobs to people. I would recommend people update their profile as they get older or go through a major life change to improve the accuracy of the system, but even initially it would be far more accurate than anything available today -- a true game changer.
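Steel doesn't spell out the mechanics of the matching here, but the core idea -- score a person's profile against the components a job analysis says matter -- can be sketched in miniature. The following is a hypothetical illustration only: the component names, weights, and scores are invented for this example and are not taken from the synthetic validity proposal itself.

```python
def match_score(profile, job_weights):
    """Combine an applicant's component scores (0-1 scale) into a single
    match score, weighted by how much a job analysis says each component
    matters for the position. Components missing from the profile count as 0."""
    total = sum(job_weights.values())
    return sum(profile.get(c, 0.0) * w for c, w in job_weights.items()) / total

# Illustrative numbers only
applicant = {"conscientiousness": 0.8, "numerical": 0.6, "verbal": 0.9}
job = {"numerical": 2.0, "verbal": 1.0}  # weights from a hypothetical job analysis

score = match_score(applicant, job)  # roughly 0.7 for these numbers
```

With profiles stored once and jobs described as weighted components, every new job posting can be scored against every profile automatically -- which is what makes the "jobs arrive in your morning email, ranked by your likelihood of success" scenario computationally cheap.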

Follow-up: Some might see a contradiction here. You cite an article that bashes internet-based job matching, yet this is what you're suggesting. Would your system be more effective or simply supplement traditional recruiting methods (e.g., referrals)?

A: Yup, we can do better. The internet is just a delivery mechanism; no matter how high-speed and video-enabled it is, it is still delivering the same crap. This system would give any attempt to match people to jobs, or jobs to people, the highest possible predictiveness.

Next time: Q&A Part 2

Grove, W. M. (2005). Clinical versus statistical prediction: The contribution of Paul E. Meehl. Journal of Clinical Psychology, 61(10), 1233-1243. doi: 10.1002/jclp.20179

Kuncel, N. R., Klieger, D., Connelly, B., & Ones, D. S. (2008, April). Mechanical versus clinical data combination in I/O psychology. In I. H. Kwaske (Chair), Individual Assessment: Does the research support the practice? Symposium conducted at the annual meeting of the Society for Industrial and Organizational Psychology, San Francisco, CA.

Stagner, R. (1958). The gullibility of personnel managers. Personnel Psychology, 11(3), 347-352.
